Page 1 of results, 43768 digital items found in 0.075 seconds

Um estudo sobre qualidade de dados em biodiversidade: aplicação a um sistema de digitalização de ocorrências de espécies; A study about data quality in biodiversity: application to a species occurrences digitization system

Veiga, Allan Koch
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's Dissertation Format: application/pdf
Published 09/02/2012 PT
Search Relevance
65.8%
To combat the current environmental sustainability crisis, numerous studies on biodiversity and the environment have been conducted to support efficient strategies for conservation and the use of natural resources. These studies are grounded in biodiversity assessments and monitoring, which involve the collection, storage, analysis, simulation, modeling, visualization and exchange of a substantial volume of biodiversity data over broad temporal and spatial scopes. Species occurrence data are a particularly important type of biodiversity data, since they are widely used in many studies. However, for the analyses and models generated from these data to be reliable, the data used must be of high quality. Thus, to improve the Data Quality (DQ) of species occurrence data, the goal of this work was to carry out a study on DQ applied to species occurrence data that would make it possible to assess and improve DQ through error-prevention techniques and resources. The study was applied to an Information System (IS) for the digitization of species occurrence data, the Biodiversity Data Digitizer (BDD)...

Avaliação da qualidade do dado espacial digital de acordo com parâmetros estabelecidos por usuários.; Digital spatial data quality evaluation based on user parameters.

Salisso Filho, João Luiz
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's Dissertation Format: application/pdf
Published 02/05/2013 PT
Search Relevance
65.87%
Spatial information is increasingly pervasive in the daily lives of ordinary citizens, companies and government institutions. Applications such as Google Earth, Bing Maps and GPS-based location apps present spatial information as a commodity. More and more public and private companies incorporate spatial data into their decision-making processes, making the quality of this kind of data even more critical. Given its multidisciplinary nature and, above all, the volume of information made available to users, a computer-supported data evaluation method is needed that allows users to assess how well such data actually fits the intended use. This Master's dissertation proposes a structured, computer-supported methodology for evaluating spatial data. The methodology, based on standards published by the International Organization for Standardization (ISO), allows spatial data users to evaluate data quality against parameters established by the users themselves. It also allows users to compare the quality exhibited by the spatial data with the quality information provided by the data producer. In this way...

Data quality assessment in performance measurement

Sousa, Sérgio; Nunes, Eusébio P.; Lopes, Isabel da Silva
Source: IAENG Publisher: IAENG
Type: Conference Paper or Conference Object
Published //2012 ENG
Search Relevance
65.78%
Data quality is a multi-dimensional concept, and this research explores its impact on performance measurement systems. Despite the large number of publications on the design of performance measurement systems (PMSs) and the definition of critical success factors for developing Performance Measures (PMs), from the data user's perspective data quality problems may still arise that have a negative impact on decision making. This work identifies and classifies uncertainty components of PMSs and proposes a qualitative method for assessing PM quality. Fuzzy numbers are used to represent PM uncertainty, and a method is proposed to calculate an indicator of the compliance between a PM and a target value, which can serve as a risk indicator for the decision-maker.; FEDER - Programa Operacional Fatores de Competitividade (COMPETE); Fundação para a Ciência e Tecnologia (FCT) - FCOMP-01-0124-FEDER
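The compliance indicator described in the abstract can be illustrated with triangular fuzzy numbers. The sketch below is a minimal illustration under that assumption, not the paper's exact formulation: the indicator is taken as the possibility that the fuzzy PM value reaches a "higher is better" target.

```python
def compliance(a: float, b: float, c: float, target: float) -> float:
    """Possibility that a triangular fuzzy PM (support [a, c], mode b,
    with a <= b <= c) meets a 'higher is better' crisp target: the
    supremum of the membership function over all values >= target."""
    if target <= b:      # target at or below the modal value: fully possible
        return 1.0
    if target >= c:      # target beyond the upper support: impossible
        return 0.0
    return (c - target) / (c - b)   # descending branch of the triangle

# A PM measured as "about 92, somewhere between 88 and 95", against a target of 94:
risk_indicator = compliance(88.0, 92.0, 95.0, 94.0)
```

A low value of the indicator warns the decision-maker that, given the PM's uncertainty, meeting the target is barely possible.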

Improvements in data quality for decision support in intensive care

Portela, Filipe; Vilas-Boas, Marta; Santos, Manuel Filipe; Rua, Fernando
Source: Springer Publisher: Springer
Type: Book Chapter
Published //2012 ENG
Search Relevance
65.8%
Nowadays, there is a plethora of technology in hospitals and, in particular, in intensive care units. The clinical data produced every day can be integrated into a decision support system in real time to improve the quality of care for critically ill patients. However, many sensitive aspects must be taken into account, chiefly data quality and the integration of heterogeneous data sources. This paper presents INTCare, an Intelligent Decision Support System for Intensive Care operating in real time, and addresses these aspects, in particular the development of an Electronic Nursing Record and improvements in the quality of monitored data.; Fundação para a Ciência e a Tecnologia (FCT)

Real-time decision support in intensive medicine: an intelligent approach for monitoring data quality

Portela, Filipe; Santos, Manuel Filipe; Machado, José Manuel; Abelha, António; Silva, Álvaro; Rua, Fernando
Source: ETPublishing Publisher: ETPublishing
Type: Scientific Journal Article
Published /03/2013 ENG
Search Relevance
65.86%
Intensive Medicine is an area where large amounts of data are generated every day. The process of obtaining knowledge from these data is extremely difficult and sometimes dangerous. The main obstacles are the volume of data collected manually and the quality of the data collected automatically. Information quality is a major constraint on the success of Intelligent Decision Support Systems (IDSS). This is the case of INTCare, an IDSS that operates in real time. Data quality needs to be ensured continuously, essentially in the data acquisition process and in the evaluation of the results obtained from data mining models. To automate this process, a set of intelligent agents has been developed to perform data quality tasks. This paper explores data quality issues in IDSS and presents an intelligent approach for monitoring data quality in the INTCare system.; Fundação para a Ciência e a Tecnologia (FCT)

Semantic Similarity Match for Data Quality

Martins, Fernando; Falcão, André; Couto, Francisco M.
Source: Department of Informatics, University of Lisbon Publisher: Department of Informatics, University of Lisbon
Type: Report
Published /10/2007 POR
Search Relevance
65.77%
Data quality is a critical aspect of applications that support business operations. Often entities are represented more than once in data repositories. Since duplicate records do not share a common key, they are hard to detect. Duplicate detection over text is usually performed using lexical approaches, which do not capture text sense. The difficulties increase when the duplicate detection must be performed using the text sense. This work presents a semantic similarity approach, based on a text sense matching mechanism, that detects text units which are similar in sense. The goal of the proposed semantic similarity approach is therefore to perform the duplicate detection task in a data quality process.
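The contrast the abstract draws between lexical matching and sense matching can be illustrated with a toy example. This is illustrative only: the tiny synonym table and Jaccard measure below stand in for a real lexical resource and for the report's actual sense-matching mechanism.

```python
def jaccard(a: set, b: set) -> float:
    """Lexical similarity: overlap of surface tokens."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Toy synonym table standing in for a real lexical resource (e.g. WordNet).
SYNONYMS = {"automobile": "car", "purchase": "buy"}

def canonical(tokens) -> set:
    """Map each token to a canonical sense before comparing."""
    return {SYNONYMS.get(t, t) for t in tokens}

r1 = "customer did purchase automobile".split()
r2 = "customer did buy car".split()

lexical = jaccard(set(r1), set(r2))                # low: surface forms differ
semantic = jaccard(canonical(r1), canonical(r2))   # high: senses match
```

Two records describing the same event score poorly under pure token overlap but are flagged as duplicates once tokens are reduced to a common sense.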

Corporate Data Quality Management: From Theory to Practice

Lucas, Ana
Source: Proceedings of 5ª Conferência Ibérica de Sistemas e Tecnologias de Informação Publisher: Proceedings of 5ª Conferência Ibérica de Sistemas e Tecnologias de Informação
Type: Conference Paper or Conference Object
Published //2010 ENG
Search Relevance
65.87%
It is now accepted that poor-quality data is costing corporations all over the world large amounts of money. Although research on methods and techniques for data quality assessment and improvement began in the early 1990s and is currently abundant and innovative, the academic and professional communities have virtually no dialogue, which turns out to be harmful to both. The challenge of promoting relevance in information systems research, without compromising the necessary rigor, is still present in the various disciplines of the information systems scientific area [1,2], including data quality. In this paper we present "data as a corporate asset" as a business philosophy, together with a framework for the concepts related to that philosophy, derived from the academic and professional literature. Against this framework, we present, analyze and discuss a single explanatory case study, developed in a fixed and mobile telecommunications company operating in one of the European Union countries. The results show that, in the absence of data stewardship roles, data quality problems become more of an "IT problem" than the literature typically considers, owing to Requirements Analysis Teams of the IS Development Units...

Improving Data Quality Through Effective Use of Data Semantics

Madnick, Stuart E.
Source: MIT - Massachusetts Institute of Technology Publisher: MIT - Massachusetts Institute of Technology
Type: Scientific Journal Article Format: 227013 bytes; application/pdf
EN_US
Search Relevance
65.78%
Data quality issues have taken on increasing importance in recent years. In our research, we have discovered that many “data quality” problems are actually “data misinterpretation” problems – that is, problems with data semantics. In this paper, we first illustrate some examples of these problems and then introduce a particular semantic problem that we call “corporate householding.” We stress the importance of “context” to get the appropriate answer for each task. Then we propose an approach to handle these tasks using extensions to the COntext INterchange (COIN) technology for knowledge storage and knowledge processing.; Singapore-MIT Alliance (SMA)

A comparison of data quality and practicality of online versus postal questionnaires in a sample of testicular cancer survivors

Smith, A.; King, M.; Butow, P.; Olver, I.
Source: John Wiley & Sons Ltd Publisher: John Wiley & Sons Ltd
Type: Scientific Journal Article
Published //2013 EN
Search Relevance
65.82%
OBJECTIVE: We aimed to compare data quality from online and postal questionnaires and to evaluate the practicality of these different questionnaire modes in a cancer sample. METHODS: Participants in a study investigating the psychosocial sequelae of testicular cancer could choose to complete a postal or online version of the study questionnaire. Data quality was evaluated by assessing sources of nonobservational errors such as participant nonresponse, item nonresponse and sampling bias. Time taken and number of reminders required for questionnaire return were used as indicators of practicality. RESULTS: Participant nonresponse was significantly higher among participants who chose the postal questionnaire. The proportion of questionnaires with missing items and the mean number of missing items did not differ significantly by mode. A significantly larger proportion of tertiary-educated participants and managers/professionals completed the online questionnaire. There were no significant differences in age, relationship status, employment status, country of birth or language spoken by completion mode. Compared with postal questionnaires, online questionnaires were returned significantly more quickly and required significantly fewer reminders. CONCLUSIONS: These results demonstrate that online questionnaire completion can be offered in a cancer sample without compromising data quality. In fact...

Data Quality of Fleet Management Systems in Open Pit Mining: Issues and Impacts on Key Performance Indicators for Haul Truck Fleets

Hsu, Nick
Source: Queen's University Publisher: Queen's University
Type: Doctoral Thesis
EN
Search Relevance
65.81%
Open pit mining operations typically rely upon data from a Fleet Management System (FMS) to calculate Key Performance Indicators (KPIs). For production and maintenance planning and reporting purposes, these KPIs typically include Mechanical Availability, Physical Availability, Utilization, Production Utilization, Effective Utilization, and Capital Effectiveness, as well as Mean Time Between Failure (MTBF) and Mean Time To Repair (MTTR). This thesis examined FMS datasets from two different software vendors. For each FMS, haul truck fleet data from a separate mine site was analyzed. Both mine sites had similar haul trucks and similar fleet sizes. From a qualitative perspective, it was observed that inconsistent labelling (assignment) of activities to time categories is a major impediment to FMS data quality. From a quantitative perspective, the datasets from both FMS vendors contained a surprisingly high proportion of very short duration states, indicative of either data corruption (software/hardware issues) or human error (operator input issues), which further compromised data quality. In addition, the datasets exhibited a mismatch (i.e. a lack of one-to-one correspondence) between Repair events and Unscheduled Maintenance Down Time states...
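The MTBF and MTTR figures mentioned in the abstract follow standard reliability definitions; a minimal sketch is given below, with a short-duration filter motivated by the thesis's observation that very short states often stem from data corruption or operator input error. The input representation (plain lists of durations in hours) is an assumption for illustration, not the actual FMS schema.

```python
def fleet_kpis(operating_hours, repair_hours, min_duration=0.05):
    """MTBF and MTTR from haul-truck time records (standard definitions:
    MTBF = total operating time / number of failures,
    MTTR = total repair time / number of repairs).
    Repair states shorter than min_duration hours are dropped first,
    treating them as likely data-corruption or input-error artifacts."""
    repairs = [d for d in repair_hours if d >= min_duration]
    mtbf = sum(operating_hours) / len(repairs) if repairs else float("inf")
    mttr = sum(repairs) / len(repairs) if repairs else 0.0
    return mtbf, mttr

# Three operating periods and four recorded repair states, one spuriously short:
mtbf, mttr = fleet_kpis([120.0, 95.5, 140.2], [4.0, 0.01, 6.5, 2.1])
```

Note how the 0.01 h state is excluded before the division: keeping such records would inflate the failure count and understate both MTBF and MTTR.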

Analysis of Data Quality and Information Quality Problems in Digital Manufacturing

WANG, Keqin; TONG, Shurong; ROUCOULES, Lionel; EYNARD, Benoît
Source: IEEE Publisher: IEEE
EN
Search Relevance
65.85%
This work focuses on the increasing importance of data quality in organizations, especially in digital manufacturing companies. The paper first reviews related work in the field of data quality, including the definition, dimensions, measurement, assessment and improvement of data quality. Then, taking digital manufacturing as the research object, it analyzes the different information roles, information manufacturing processes, influential factors of information quality, and the transformation levels and paths of data/information quality in digital manufacturing companies. Finally, an approach for the diagnosis, control and improvement of data/information quality in digital manufacturing companies is proposed, as the basis for further work.

Corporate data quality management in context

Lucas, Ana
Source: Proceedings of the 15th International Conference on Information Quality Publisher: Proceedings of the 15th International Conference on Information Quality
Type: Conference Paper or Conference Object
Published //2010 ENG
Search Relevance
75.86%
Presently, we are well aware that poor quality data is costing corporations all over the world large amounts of money. Nevertheless, little research has been done on how organizations deal with data quality management and the strategies they use. This work aims to answer the following questions: which business drivers motivate organizations to engage in a data quality management initiative? how do they implement data quality management? and which objectives have been achieved so far? Given the kind of research questions involved, multiple exploratory case studies were adopted as the research strategy [32]. The case studies were developed in a telecommunications company (MyTelecom), a public bank (PublicBank) and the central bank (CentralBank) of one European Union country. The results show that the main drivers of data quality (DQ) initiatives were the reduction of non-quality costs, risk management, mergers, and the improvement of the company's image among its customers, aspects in line with the literature [7, 8, 20]. The commercial corporations (MyTelecom and PublicBank) began their DQ projects with customer data, in accordance with the literature [18]...

Communicating thematic data quality with web map services

Blower, Jon D.; Masó Pau, Joan; Díaz Benito, Daniel; Roberts, Charles J.; Griffiths, Guy H.; Lewis, Jane P.; Yang, Xiaoyu; Pons, Xavier
Source: Universidade Autônoma de Barcelona Publisher: Universidade Autônoma de Barcelona
Type: Scientific Journal Article Format: application/pdf
Published //2015 ENG
Search Relevance
65.81%
Geospatial information of many kinds, from topographic maps to scientific data, is increasingly being made available through web mapping services. These allow georeferenced map images to be served from data stores and displayed in websites and geographic information systems, where they can be integrated with other geographic information. The Open Geospatial Consortium’s Web Map Service (WMS) standard has been widely adopted in diverse communities for sharing data in this way. However, current services typically provide little or no information about the quality or accuracy of the data they serve. In this paper we will describe the design and implementation of a new “quality-enabled” profile of WMS, which we call “WMS-Q”. This describes how information about data quality can be transmitted to the user through WMS. Such information can exist at many levels, from entire datasets to individual measurements, and includes the many different ways in which data uncertainty can be expressed. We also describe proposed extensions to the Symbology Encoding specification, which include provision for visualizing uncertainty in raster data in a number of different ways, including contours, shading and bivariate colour maps. We shall also describe new open-source implementations of the new specifications...

Data Quality in Analytics: Key Problems Arising from the Repurposing of Manufacturing Data

Woodall, Philip; Wainman, Anthony
Source: MIT ICIQ Publisher: MIT ICIQ
Type: Article; accepted version
EN
Search Relevance
65.87%
This is the author accepted manuscript. It was first presented at the 20th Annual MIT International Conference on Information Quality.; Repurposing data means using data for a completely different decision or task from the one it was originally intended for. This is often the case in data analytics, when data captured by the business as part of its normal operations is used by data scientists to derive business insight. When data is collected for its primary purpose, some consideration is given to ensuring that the level of data quality is "fit for purpose". Data repurposing, by definition, is using data for a different purpose, and therefore the original quality levels may not be suitable for the secondary purpose. Using interviews with various manufacturers, this paper describes examples of repurposing in manufacturing, how manufacturing organisations repurpose data, the data quality problems that arise specifically from this, and how those problems are currently addressed. From these results we present a framework which manufacturers can use to help identify and mitigate the issues caused when attempting to repurpose data.

Data quality knowledge management: A Tool for the collection and organization of metadata in a data warehouse

Neely, M. Pamela
Source: Eighth Americas Conference on Information Systems Publisher: Eighth Americas Conference on Information Systems
Type: Proceedings
EN_US
Search Relevance
65.81%
This paper describes a relational database tool, the Data Quality Knowledge Management (DQKM) tool, which captures and organizes the metadata associated with a data warehouse project. It builds on the concept of fitness for use by describing a measurement technique for subjectively assigning a measure to a data field based on the use and quality dimension of the data within the data warehouse. This measurement can then be compared to some minimum criterion, below which it is not cost effective to enhance the quality of the data. The tool can be used to make resource allocation decisions and obtain the greatest benefit for the cost when utilizing the scarce resources available to enhance source data for a data warehouse.

Data quality and the data warehouse: A Decision support system for allocation of scarce resources

Neely, M. Pamela
Source: Proceedings of the Eleventh Americas Conference on Information Systems Publisher: Proceedings of the Eleventh Americas Conference on Information Systems
Type: Proceedings
EN_US
Search Relevance
65.79%
This paper describes a decision support system (DSS) for use in allocating the scarce resources associated with data quality efforts in the construction of a data warehouse. The DSS is populated with metadata from a data warehouse project, including tags that identify the quality at intersections of data field, data use, and data dimensions. Results of an experiment in which business students used the resulting DSS are then presented. It can be shown that, given the proper set of skills, business students, as proxies for novices on a data warehouse development team, can effectively use the tool to analyze and prioritize hundreds of potential fields in a data warehouse project.

The Product approach to data quality and fitness for use: A Framework for analysis

Neely, M. Pamela
Source: Proceedings of the Tenth International Conference on Information Quality (ICIQ-05) Publisher: Proceedings of the Tenth International Conference on Information Quality (ICIQ-05)
Type: Proceedings
EN_US
Search Relevance
75.85%
The value of management decisions, the security of our nation, and the very foundations of our business integrity all depend on the quality of data and information. However, the quality of data and information depends on how that data or information will be used. This paper proposes a theory of data quality based on the five principles defined by J. M. Juran for product and service quality, and extends Wang et al.'s 1995 framework for data quality research. It then examines the data and information quality literature from journals within the context of this framework.

The Deficiencies of current data quality tools in the realm of engineering asset management

Neely, M. Pamela; Lin, Shien; Gao, Jing; Koronios, Andy
Source: Proceedings of the Twelfth Americas Conference on Information Systems Publisher: Proceedings of the Twelfth Americas Conference on Information Systems
Type: Scientific Journal Article
EN_US
Search Relevance
65.8%
Data and information quality is a well-established research topic and is gradually appearing on decision-makers' lists of top concerns. Many studies have investigated generic data/information quality issues and factors by providing a high-level abstract framework or model. Building on those studies, the researchers of this paper discussed actual data quality problems with operational and middle-level managers in engineering asset management and reviewed existing data-cleansing software tools against real engineering asset databases; the deficiencies of the existing data-cleansing approach are highlighted.

Assessing Data Quality of Integrated Data by Quality Aggregation of its Ancestors

Angeles, Maria del Pilar; MacKinnon, Lachlan Mhor
Source: Centro de Investigación en Computación, IPN Publisher: Centro de Investigación en Computación, IPN
Type: Scientific Journal Article Format: text/html
Published 01/03/2010 EN
Search Relevance
65.86%
Data quality is degraded during the process of extracting and merging data from multiple heterogeneous sources. Moreover, users have no information regarding the quality of the accessed data. This paper presents methods for assessing data quality at multiple levels of granularity, including derived non-atomic data, taking data provenance into account. The Data Quality Manager prototype has been implemented and tested to demonstrate such assessment.
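The provenance-based aggregation described in the abstract can be sketched as a weighted mean of ancestor quality scores. This is an illustrative assumption only; the Data Quality Manager's actual aggregation model may differ.

```python
def aggregate_quality(ancestors):
    """Quality score of a derived (non-atomic) data item as the weighted
    mean of its ancestors' scores. `ancestors` is a list of
    (score, weight) pairs, where the weight might be, for example, the
    number of records each source contributed to the merge. Both the
    pair representation and the choice of weights are assumptions made
    for illustration."""
    total_w = sum(w for _, w in ancestors)
    return sum(s * w for s, w in ancestors) / total_w

# A merged record derived from two sources of unequal quality and size;
# the result is pulled toward the larger (higher-weight) source:
q = aggregate_quality([(0.9, 1000), (0.6, 500)])
```

Propagating a score this way gives users of integrated data the quality information the abstract notes they otherwise lack.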

WIM calibration and data quality management

Wet, D P G de
Source: Journal of the South African Institution of Civil Engineering Publisher: Journal of the South African Institution of Civil Engineering
Type: Scientific Journal Article Format: text/html
Published 01/10/2010 EN
Search Relevance
65.79%
Weigh-in-motion (WIM) scales are installed on various higher order roads in South Africa to provide traffic loading information for pavement design, strategic planning and law enforcement. Some WIM systems produce anomalies that cannot be satisfactorily explained even by highly experienced professionals. Much of the problem relates to the difficulty in determining the appropriate calibration factors to correct systematic measurement error for WIM systems and the inadequacy of data quality management methods. The author has developed a post-calibration method for WIM data, called the Truck Tractor (TT) method, to correct the magnitude of recorded axle loads in retrospect. In addition, it incorporates a series of data quality checks. The TT method is robust, accurate and adequately simple for use on a routine basis for a wide variety of South African WIM systems. The calibration module of the TT method (i.e. the procedure to determine the calibration factor, kTT) has been accepted by SANRAL and incorporated into the model it uses to quantify the cost of overloading on toll concessions. Some of the data quality checking concepts are also being considered for further use and threshold values for tests are being refined by SANRAL for this purpose.