Page 1 of results: 259 digital items found in 0.111 seconds

## Taking Notes on PDAs with Shared Text Input

Source: Escola de Pós-Graduação Naval
Publisher: Escola de Pós-Graduação Naval

Type: Scientific journal article

Search relevance: 35.64%

This paper presents a system designed to support note taking on a wirelessly connected
PDA in a classroom. The system leverages the devices' wireless connectivity to let students
share their notes in real time, allowing individuals to quickly reuse words from their fellow note
takers. In addition, presentation material such as PowerPoint slides is also extracted when
presented by the teacher, giving students further means of reusing words. We describe the system
and report the findings of an initial user study in which the system was used for four months
during a graduate-level course on wireless mobile computing with 20 students.


## Memórias associativas L-fuzzy com ênfase em memórias associativas fuzzy intervalares; L-fuzzy associative memories with an emphasis on interval-valued fuzzy associative memories

Source: Biblioteca Digital da Unicamp
Publisher: Biblioteca Digital da Unicamp

Type: Master's thesis
Format: application/pdf

Published on 28/01/2015
Language: PT

Search relevance: 45.7%

Keywords: Fuzzy systems; Associative memory; Mathematical morphology; Fuzzy sets; Time-series forecasting

The last decades have witnessed the emergence of a variety of approaches to problem solving based on lattice computing, such as morphological neural networks and lattice-based models of neurocomputing and fuzzy reasoning. We use the term "lattice" here in the sense given in Birkhoff's seminal work. Lattice theory was born from Boolean algebra and has a wide range of applications, such as formal concept analysis, computational intelligence, fuzzy set theory, and mathematical morphology (MM). MM on complete lattices provides the theoretical basis for a family of computational intelligence models known as morphological neural networks (MNNs), which include gray-scale morphological associative memories and fuzzy morphological associative memories (FMAMs)...
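To make the recall mechanism of such memories concrete, the sketch below implements a generic fuzzy associative memory with max-min encoding and recall. It is a minimal illustration of the FMAM family mentioned above, not the interval-valued L-fuzzy models developed in the thesis; all names and values are hypothetical.

```python
import numpy as np

def fam_store(xs, ys):
    """Encode pattern pairs (x, y) with fuzzy values in [0, 1] into one
    weight matrix using the max-min learning rule: W_ij = max_k min(y_i, x_j)."""
    W = np.zeros((ys[0].size, xs[0].size))
    for x, y in zip(xs, ys):
        W = np.maximum(W, np.minimum.outer(y, x))
    return W

def fam_recall(W, x):
    """Recall an output by max-min composition: y_i = max_j min(W_ij, x_j)."""
    return np.minimum(W, x).max(axis=1)

x1 = np.array([1.0, 0.2, 0.0]); y1 = np.array([0.9, 0.1])
x2 = np.array([0.0, 0.3, 1.0]); y2 = np.array([0.2, 0.8])
W = fam_store([x1, x2], [y1, y2])
print(fam_recall(W, x1))
```

Note that max-min recall may over-estimate some memberships when stored patterns overlap (crosstalk), which is one motivation for the more refined models the thesis studies.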


## On evaluation of XML documents using fuzzy linguistic techniques

Source: ISKO Conference
Publisher: ISKO Conference

Type: Scientific journal article

Language: EN

Search relevance: 65.74%

Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method for XML documents based on computing with words is presented. Given an XML document type (e.g. a scientific article), we consider that its elements are not equally informative. This is indicated through the use of a DTD, defining linguistic importance attributes for the most meaningful elements of the designed DTD. The evaluation method then generates linguistic recommendations from the linguistic evaluation judgements provided by different recommenders on the meaningful elements of the DTD.


## Evaluating the informative quality of web sites by fuzzy computing with words

Source: Atlantic Web Intelligence Conference
Publisher: Atlantic Web Intelligence Conference

Type: Scientific journal article

Language: EN

Search relevance: 95.85%

In this paper we present a method based on fuzzy computing with words to measure the informative quality of Web sites used to publish information stored in XML documents. The method generates linguistic recommendations on the informative quality of Web sites, and is made up of an evaluation scheme to analyze the informative quality of such Web sites together with a generation method for linguistic recommendations. The evaluation scheme covers both technical criteria of Web site design and criteria related to the information content of Web sites. It is user oriented because the chosen criteria are user friendly, so that visitors to a Web site can assess them by means of linguistic evaluation judgements. The generation method derives linguistic recommendations of Web sites from those linguistic evaluation judgements using the LOWA and LWA operators. Then, when a user looks for information on the Web, we can help him or her with recommendations both on the Web sites that store the retrieved documents and on other Web sites that store further documents of interest related to his or her information needs. With this proposal, the information filtering and evaluation possibilities on the Web are increased.
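The LOWA operator aggregates linguistic labels by an ordered weighted combination. The sketch below is a simplified stand-in that averages ordered label indices and rounds back to the scale; the actual LOWA operator is defined by recursive convex combinations of labels, so this is an assumption-laden approximation for illustration only, with a hypothetical label scale.

```python
def linguistic_owa(labels, judgements, weights):
    """OWA-style aggregation of linguistic judgements: sort label indices
    in descending order, take the weighted mean, round back to a label."""
    scale = {lab: i for i, lab in enumerate(labels)}
    idx = sorted((scale[j] for j in judgements), reverse=True)
    avg = sum(w * i for w, i in zip(weights, idx))
    return labels[round(avg)]

scale = ["none", "low", "medium", "high", "perfect"]  # s0..s4
judgements = ["high", "medium", "perfect"]            # three recommenders
weights = [0.2, 0.6, 0.2]                             # OWA weights, sum to 1
print(linguistic_owa(scale, judgements, weights))     # 'high'
```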


## A Fuzzy Petri Nets Model for Computing With Words

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 19/10/2009

Search relevance: 66.05%

Motivated by Zadeh's paradigm of computing with words rather than numbers,
several formal models of computing with words have recently been proposed.
These models are based on automata and thus are not well-suited for concurrent
computing. In this paper, we incorporate the well-known model of concurrent
computing, Petri nets, together with fuzzy set theory and thereby establish a
concurrency model of computing with words--fuzzy Petri nets for computing with
words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of
transitions are some special words modeled by fuzzy sets. By employing the
methodology of fuzzy reasoning, we give a faithful extension of an FPNCW which
makes it possible to compute with more words. The language expressiveness of
the two formal models of computing with words, fuzzy automata for computing
with words and FPNCWs, is compared as well. A few small examples are provided
to illustrate the theoretical development.; Comment: double-column, 14 pages, 8 figures


## Algebraic synchronization criterion and computing reset words

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Search relevance: 35.64%

We refine a uniform algebraic approach for deriving upper bounds on reset
thresholds of synchronizing automata. We express the condition that an
automaton is synchronizing in terms of linear algebra, and obtain upper bounds
for the reset thresholds of automata with a short word of a small rank. The
results are applied to make several improvements in the area.
We improve the best general upper bound for reset thresholds of finite prefix
codes (Huffman codes): we show that an $n$-state synchronizing decoder has a
reset word of length at most $O(n \log^3 n)$. In addition to that, we prove
that the expected reset threshold of a uniformly random synchronizing binary
$n$-state decoder is at most $O(n \log n)$. We also show that for any non-unary
alphabet there exist decoders whose reset threshold is in $\varTheta(n)$.
We prove the Černý conjecture for $n$-state automata with a letter of
rank at most $\sqrt[3]{6n-6}$. In another corollary, based on the recent
results of Nicaud, we show that the probability that the Černý conjecture
does not hold for a random synchronizing binary automaton is exponentially
small in terms of the number of states, and also that the expected value of the
reset threshold of an $n$-state random synchronizing binary automaton is at
most $n^{3/2+o(1)}$.
Moreover...
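To illustrate what a reset word is, the sketch below computes a shortest reset word for a small automaton by breadth-first search over subsets of states (exponential in the number of states, so unrelated to the algebraic bounds above and usable only on toy examples). The example is the classical Černý automaton C_4, whose reset threshold is (4-1)^2 = 9.

```python
from collections import deque

def shortest_reset_word(states, alphabet, delta):
    """BFS on subsets of states: a word is a reset word when applying it
    to the full state set yields a singleton."""
    start = frozenset(states)
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:
            return word
        for a in alphabet:
            nxt = frozenset(delta[(q, a)] for q in subset)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None  # automaton is not synchronizing

# Cerny automaton C_4: 'a' shifts cyclically, 'b' merges state 3 into 0.
delta = {(q, "a"): (q + 1) % 4 for q in range(4)}
delta.update({(q, "b"): (0 if q == 3 else q) for q in range(4)})
print(shortest_reset_word(range(4), "ab", delta))  # a word of length 9
```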


## Efficient Resource Sharing Through GPU Virtualization on Accelerated High Performance Computing Systems

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 24/11/2015

Search relevance: 35.69%

The High Performance Computing (HPC) field is witnessing a widespread
adoption of Graphics Processing Units (GPUs) as co-processors for conventional
homogeneous clusters. The adoption of the prevalent Single-Program Multiple-Data
(SPMD) programming paradigm for GPU-based parallel processing brings in the
challenge of resource underutilization, with the asymmetrical
processor/co-processor distribution. In other words, under SPMD, balanced
CPU/GPU distribution is required to ensure full resource utilization. In this
paper, we propose a GPU resource virtualization approach to allow underutilized
microprocessors to efficiently share the GPUs. We propose an efficient GPU
sharing scenario achieved through GPU virtualization and analyze the
performance potentials through execution models. We further present the
implementation details of the virtualization infrastructure, followed by the
experimental analyses. The results demonstrate considerable performance gains
with GPU virtualization. Furthermore, the proposed solution enables full
utilization of asymmetrical resources, through efficient GPU sharing among
microprocessors, while incurring low overhead due to the added virtualization
layer.; Comment: 21 pages


## All-optical quantum computing with a hybrid solid-state processing unit

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Search relevance: 35.68%

We develop an architecture of a hybrid quantum solid-state processing unit for
universal quantum computing. The architecture allows distant and nonidentical
solid-state qubits in distinct physical systems to interact and work
collaboratively. All the quantum computing procedures are controlled by optical
methods using classical fields and cavity QED. Our methods have the prominent
advantage of insensitivity to dissipation processes, benefiting from the
virtual excitation of subsystems. Moreover, QND measurements and state
transfer for the solid-state qubits are proposed. The architecture opens
promising perspectives for implementing scalable quantum computation, in the
broader sense that different solid-state systems can merge and be integrated
into one quantum processor.; Comment: 9 pages, 4 figures; supplements the
demonstration of the efficiency and practicability of the proposal, supplements
figure 4, and amends some words and phrases


## A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 21/05/2015

Search relevance: 35.65%

Subjects: Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing

In this work, we address the problem of modeling all the nodes (words or
phrases) in a dependency tree with dense representations. We propose a
recursive convolutional neural network (RCNN) architecture to capture syntactic
and compositional-semantic representations of phrases and words in a dependency
tree. Unlike the original recursive neural network, we introduce convolution
and pooling layers, which can model a variety of compositions via the feature
maps and choose the most informative compositions via the pooling layers.
Based on the RCNN, we use a discriminative model to re-rank a $k$-best list
of candidate dependency parse trees. The experiments show that the RCNN is very
effective at improving state-of-the-art dependency parsing on both English
and Chinese datasets.


## Approximate Robotic Mapping from sonar data by modeling Perceptions with Antonyms

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 30/06/2010

Search relevance: 45.55%

This work, inspired by the idea of "Computing with Words and Perceptions"
proposed by Zadeh in 2001, focuses on how to transform measurements into
perceptions for the problem of map building by Autonomous Mobile Robots. We
propose to model the perceptions obtained from sonar-sensors as two grid maps:
one for obstacles and another for empty spaces. The rules used to build and
integrate these maps are expressed by linguistic descriptions and modeled by
fuzzy rules. The main difference of this approach from other studies reported
in the literature is that the method presented here is based on the hypothesis
that the concepts "occupied" and "empty" are antonyms rather than complementary
(as it happens in probabilistic approaches), or independent (as it happens in
the previous fuzzy models).
Controlled experiments with a real robot in three representative indoor
environments have been performed and the results are presented. We offer a
qualitative and quantitative comparison of the estimated maps obtained by the
probabilistic approach, the previous fuzzy method, and the new antonyms-based
fuzzy approach. It is shown that the maps obtained with the antonyms-based
approach are better defined and capture the shape of the walls and of the
empty spaces more faithfully...


## Research On Mobile Cloud Computing: Review, Trend, And Perspectives

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 05/06/2012

Search relevance: 35.79%

Mobile Cloud Computing (MCC), which combines mobile computing and cloud
computing, has become one of the industry buzzwords and a major discussion
thread in the IT world since 2009. As MCC is still at an early stage of
development, it is necessary to grasp a thorough understanding of the
technology in order to point out the direction of future research. With that
aim, this paper presents a review of the background and principles of MCC,
its characteristics, recent research work, and future research trends. A brief
account of the background of MCC, from mobile computing to cloud computing, is
presented, followed by a discussion of characteristics and recent research
work. The paper then analyses the features and infrastructure of mobile cloud
computing. The rest of the paper analyses the challenges of mobile cloud
computing, summarizes some research projects related to this area, and points
out promising future research directions.; Comment: 8 pages, 7 figures, The Second International Conference on Digital
Information and Communication Technology and its Applications


## Retraction and Generalized Extension of Computing with Words

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Search relevance: 66.06%

Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a
formal model of computing with values. Motivated by Zadeh's paradigm of
computing with words rather than numbers, Ying proposed a kind of fuzzy
automata, whose input alphabet consists of all fuzzy subsets of a set of
symbols, as a formal model of computing with all words. In this paper, we
introduce a somewhat general formal model of computing with (some special)
words. The new features of the model are that the input alphabet only comprises
some (not necessarily all) fuzzy subsets of a set of symbols and the fuzzy
transition function can be specified arbitrarily. By employing the methodology
of fuzzy control, we establish a retraction principle from computing with words
to computing with values for handling crisp inputs and a generalized extension
principle from computing with words to computing with all words for handling
fuzzy inputs. These principles show that computing with values and computing
with all words can be respectively implemented by computing with words. Some
algebraic properties of retractions and generalized extensions are addressed as
well.; Comment: 13 double-column pages; 3 figures; to be published in the IEEE
Transactions on Fuzzy Systems
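The step from computing with values to computing with words can be illustrated with a toy fuzzy automaton whose input is a fuzzy subset of the alphabet, propagated by sup-min composition. This is a minimal sketch under that assumption, not the paper's exact retraction/extension construction; all matrices and names are hypothetical.

```python
import numpy as np

def step_with_word(mu, A, delta):
    """One transition of a fuzzy automaton on a fuzzy input symbol.
    mu: fuzzy set of current states; A: fuzzy subset of crisp symbols;
    delta[a]: fuzzy transition matrix for crisp symbol a.
    New membership: nu[q'] = sup over a, q of min(A[a], mu[q], delta[a][q][q'])."""
    nu = np.zeros_like(mu)
    for a, T in delta.items():
        # cap by the degree to which the word means the crisp symbol a
        contrib = np.minimum(A.get(a, 0.0), np.minimum(mu[:, None], T)).max(axis=0)
        nu = np.maximum(nu, contrib)
    return nu

delta = {
    "a": np.array([[0.0, 1.0], [0.0, 1.0]]),
    "b": np.array([[1.0, 0.0], [0.3, 0.7]]),
}
mu0 = np.array([1.0, 0.0])       # start surely in state 0
word = {"a": 0.8, "b": 0.4}      # a fuzzy input: "mostly a"
print(step_with_word(mu0, word, delta))  # [0.4 0.8]
```

Feeding a crisp symbol (a word with membership 1 on a single symbol, 0 elsewhere) recovers the ordinary fuzzy transition, which is the retraction idea in miniature.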


## Probabilistic Automata for Computing with Words

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 22/04/2006

Search relevance: 66.05%

Usually, probabilistic automata and probabilistic grammars have crisp symbols
as inputs, which can be viewed as the formal models of computing with values.
In this paper, we first introduce probabilistic automata and probabilistic
grammars for computing with (some special) words in a probabilistic framework,
where the words are interpreted as probabilistic distributions or possibility
distributions over a set of crisp symbols. By probabilistic conditioning, we
then establish a retraction principle from computing with words to computing
with values for handling crisp inputs and a generalized extension principle
from computing with words to computing with all words for handling arbitrary
inputs. These principles show that computing with values and computing with all
words can be respectively implemented by computing with some special words. To
compare the transition probabilities of two near inputs, we also examine some
analytical properties of the transition probability functions of generalized
extensions. Moreover, the retractions and the generalized extensions are shown
to be equivalence-preserving. Finally, we clarify some relationships among the
retractions, the generalized extensions, and the extensions studied recently by
Qiu and Wang.; Comment: 35 pages; 3 figures
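In the probabilistic setting, a word interpreted as a probability distribution over crisp symbols can drive the automaton through the corresponding mixture of per-symbol stochastic matrices. This is a minimal sketch of that conditioning idea, not the paper's formal construction; the matrices and the word are hypothetical.

```python
import numpy as np

def step_with_word(p, A, T):
    """One step of a probabilistic automaton on a word given as a
    distribution A over crisp symbols: the effective transition matrix
    is the A-weighted mixture of the per-symbol stochastic matrices."""
    M = sum(A[a] * T[a] for a in A)   # mixture transition matrix
    return p @ M                       # new state distribution

T = {
    "a": np.array([[0.9, 0.1], [0.0, 1.0]]),
    "b": np.array([[0.2, 0.8], [0.5, 0.5]]),
}
p0 = np.array([1.0, 0.0])
word = {"a": 0.75, "b": 0.25}          # the word "mostly a"
print(step_with_word(p0, word, T))     # [0.725 0.275]
```

A word concentrated on one symbol reduces to the ordinary per-symbol transition, mirroring the retraction principle for crisp inputs.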


## Towards an Extension of the 2-tuple Linguistic Model to Deal With Unbalanced Linguistic Term sets

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 22/04/2013

Search relevance: 45.65%

In the domain of Computing with Words (CW), fuzzy linguistic approaches are
known to be relevant in many decision-making problems. Indeed, they allow us to
model human reasoning by replacing words, assessments, preferences, choices,
wishes... with ad hoc variables, such as fuzzy sets or more sophisticated
variables.
This paper focuses on a particular model: Herrera & Martínez's 2-tuple
linguistic model and their approach to dealing with unbalanced linguistic term
sets. It is interesting since the computations are accomplished without loss of
information, while the results of the decision-making processes always refer to
the initial linguistic term set. They propose a fuzzy partition which
distributes data on the axis by using linguistic hierarchies to manage the
non-uniformity. However, the required input (especially the density around the
terms) taken by their fuzzy-partition algorithm may be considered too demanding
in a real-world application, since density is not always easy to determine.
Moreover, in some limit cases (especially when two terms are semantically very
close to each other), the partition does not comply with the data themselves;
it is not close to reality. Therefore we propose to modify the
required input...
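The balanced 2-tuple model itself is easy to state: a value β in [0, g] is represented as a pair (s_i, α) with i = round(β) and symbolic translation α = β − i, which makes aggregation lossless. Below is a minimal sketch of the Δ and Δ⁻¹ translations with a hypothetical five-label scale; the unbalanced extension discussed above is not covered.

```python
def delta(beta, scale):
    """Translate a value beta in [0, g] into a 2-tuple (s_i, alpha),
    where i = round(beta) and alpha = beta - i lies in [-0.5, 0.5)."""
    i = round(beta)
    return scale[i], beta - i

def delta_inv(two_tuple, scale):
    """Inverse translation: recover the numeric value from (s_i, alpha)."""
    label, alpha = two_tuple
    return scale.index(label) + alpha

scale = ["none", "low", "medium", "high", "perfect"]  # s0..s4
# Aggregate three assessments by averaging their numeric translations:
vals = [delta_inv(t, scale) for t in [("high", 0.0), ("medium", -0.2), ("high", 0.4)]]
print(delta(sum(vals) / len(vals), scale))
```

Because Δ⁻¹(Δ(β)) = β, the average is represented exactly, whereas rounding to a bare label would lose the α offset.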


## Recurrent Neural Network Method in Arabic Words Recognition System

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 20/01/2013

Search relevance: 35.73%

The recognition of unconstrained handwriting continues to be a difficult task
for computers despite active research for several decades. This is because
handwritten text poses great challenges such as character and word
segmentation, character recognition, variation between handwriting styles,
differing character sizes, and the absence of font constraints, as well as
background clarity. This paper primarily discusses online handwriting
recognition methods for Arabic words, which are widely used across the Middle
East and North Africa. Because of the connectivity between the characters in
the body of an Arabic word, segmentation of Arabic words is very difficult. We
apply a recurrent neural network to online handwritten Arabic word recognition.
The key innovation is a recently introduced recurrent-neural-network objective
function known as connectionist temporal classification. The system consists
of an advanced recurrent neural network with an output layer designed for
sequence labeling, partially combined with a probabilistic language model.
Experimental results show that unconstrained Arabic words achieve recognition
rates of about 79%, significantly higher than the roughly 70% achieved by a
previously developed hidden Markov model based recognition system.; Comment: 6 Pages...
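The core of connectionist temporal classification is the many-to-one collapse map B that turns a frame-level output path into a labelling by merging repeated labels and removing blanks; training maximizes the total probability of all paths that collapse to the target. A minimal sketch of B (the transliterated example word is hypothetical):

```python
def ctc_collapse(path, blank="-"):
    """The CTC map B: merge repeated labels, then drop blank symbols.
    Many frame-level paths collapse to the same target labelling."""
    out = []
    prev = None
    for sym in path:
        if sym != prev and sym != blank:
            out.append(sym)
        prev = sym
    return "".join(out)

# Two different frame-level paths collapse to the same word:
print(ctc_collapse("--kk-i-ttaa-b"))  # kitab
print(ctc_collapse("k-itt--ab-"))     # kitab
```

The blank symbol is what lets CTC emit genuinely doubled letters: a repeat separated by a blank survives the merge step.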


## A Systematic Mapping Study on Cloud Computing

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 19/08/2013

Search relevance: 35.76%

Cloud Computing emerges from the global economic crisis as an option to use
computing resources from a more rational point of view; in other words, a
cheaper way to have IT resources. However, issues such as security and privacy,
SLAs (Service Level Agreements), resource sharing, and billing have left open
questions about the real gains of that model. This study aims to investigate
the state of the art in Cloud Computing, identify gaps and challenges,
synthesize the available evidence on both its use and its development, and
provide relevant information, clarifying open questions and commonly discussed
issues about the model in the literature. The good practices of the systematic
mapping study methodology were adopted in order to reach those objectives.
Although Cloud Computing is based on a business model with over 50 years of
existence, the evidence found in this study indicates that Cloud Computing
still presents limitations that prevent full use of the on-demand proposal.


## Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Search relevance: 35.73%

Subjects: Computer Science - Neural and Evolutionary Computing; Computer Science - Computation and Language; Computer Science - Learning

An important class of problems involves training deep neural networks with
sparse prediction targets of very high dimension D. These occur naturally in
e.g. neural language models or the learning of word-embeddings, often posed as
predicting the probability of next words among a vocabulary of size D (e.g.
200,000). Computing the equally large, but typically non-sparse, D-dimensional
output vector from a last hidden layer of reasonable dimension d (e.g. 500)
incurs a prohibitive O(Dd) computational cost for each example, as does
updating the D x d output weight matrix and computing the gradient needed for
backpropagation to previous layers. While efficient handling of large sparse
network inputs is trivial, the case of large sparse targets is not, and has
thus so far been sidestepped with approximate alternatives such as hierarchical
softmax or sampling-based approximations during training. In this work we
develop an original algorithmic approach which, for a family of loss functions
that includes squared error and spherical softmax, can compute the exact loss,
gradient update for the output weights, and gradient for backpropagation, all
in O(d^2) per example instead of O(Dd), remarkably without ever computing the
D-dimensional output. The proposed algorithm yields a speedup of D/4d ...
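For the squared-error member of that loss family, the key identity can be sketched: with output o = W h and a sparse target y supported on an index set S, ||o − y||² = hᵀ(WᵀW)h − 2 y_S · (W_S h) + ||y||², so maintaining the d×d Gram matrix Q = WᵀW gives the exact loss without ever forming the D-dimensional output. The sketch below only checks that identity numerically; it does not cover the paper's implicit weight and gradient updates, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 50_000, 50                          # vocabulary size, hidden size
W = rng.normal(size=(D, d)) / np.sqrt(d)   # output weights, o = W @ h
h = rng.normal(size=d)                     # last hidden layer activation
idx = np.array([7, 123, 40_000])           # support S of the sparse target
y_s = np.array([1.0, 2.0, -1.0])           # nonzero target values on S

Q = W.T @ W   # d x d Gram matrix, maintainable across parameter updates

# Exact loss ||W h - y||^2 without the D-dimensional output:
loss_fast = h @ Q @ h - 2.0 * (y_s @ (W[idx] @ h)) + y_s @ y_s

# Naive check that does form the full output (for validation only):
o = W @ h
o[idx] -= y_s
loss_naive = o @ o
print(np.isclose(loss_fast, loss_naive))
```

The fast path touches only Q, the |S| selected rows of W, and h, which is the source of the O(d²) per-example cost quoted above.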


## Words are not Equal: Graded Weighting Model for building Composite Document Vectors

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 11/12/2015

Search relevance: 35.69%

Subjects: Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing

Despite the success of distributional semantics, composing phrases from word
vectors remains an important challenge. Several methods have been tried for
benchmark tasks such as sentiment classification, including word vector
averaging, matrix-vector approaches based on parsing, and on-the-fly learning
of paragraph vectors. Most models usually omit stop words from the composition.
Instead of such a yes-no decision, we consider several graded schemes where
words are weighted according to their discriminatory relevance with respect to
their use in the document (e.g., idf). Some of these methods (particularly
tf-idf) are seen to result in a significant improvement in performance over the
prior state of the art. Further, combining such approaches into an ensemble
based on alternate classifiers such as the RNN model results in a 1.6%
performance improvement on the standard IMDB movie review dataset and a 7.01%
improvement on Amazon product reviews. Since these are language-free models and
can be obtained in an unsupervised manner, they are also of interest for
under-resourced languages such as Hindi and many others. We demonstrate the
language-free aspect by showing a gain of 12% for two review
datasets over earlier results...
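A graded weighting scheme of the kind described can be sketched as an idf-weighted average of word vectors, in contrast to a hard stop-word cut-off. The embeddings and idf values below are toy numbers, and this is not the paper's exact graded model or ensemble.

```python
import numpy as np

def composite_vector(doc, vectors, idf):
    """Compose a document vector as the idf-weighted average of its word
    vectors: low-idf words (e.g. 'the') contribute little instead of being
    dropped outright."""
    words = [w for w in doc.split() if w in vectors]
    weights = np.array([idf.get(w, 0.0) for w in words])
    if weights.sum() == 0:
        return np.zeros(next(iter(vectors.values())).size)
    V = np.stack([vectors[w] for w in words])
    return (weights[:, None] * V).sum(axis=0) / weights.sum()

# Toy embeddings and idf values (hypothetical numbers for illustration):
vectors = {"the": np.array([0.1, 0.0]),
           "movie": np.array([1.0, 0.2]),
           "excellent": np.array([0.1, 1.0])}
idf = {"the": 0.01, "movie": 1.2, "excellent": 2.5}
print(composite_vector("the movie was excellent", vectors, idf))
```

Note how the high-idf word "excellent" dominates the composite while "the" is almost invisible, which is the graded analogue of stop-word removal.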


## Topic words analysis based on LDA model

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 14/05/2014

Search relevance: 35.85%

Subjects: Computer Science - Social and Information Networks; Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Information Retrieval; Computer Science - Learning; Statistics - Machine Learning

Social network analysis (SNA), a research field describing and modeling the
social connections of a certain group of people, is popular among network
services. Our topic-words analysis project is an SNA method to visualize the
topic words among emails from Obama.com to accounts registered in Columbus,
Ohio. Based on the Latent Dirichlet Allocation (LDA) model, a popular topic
model in SNA, our project characterizes the preference of senders for a target
group of receptors. Gibbs sampling is used to estimate the topic and word
distributions. Our training and testing data are emails from the carbon-free
server Datagreening.com. We use the parallel computing tool BashReduce for word
processing and generate related words under each latent topic to discover
typical information of political news sent specifically to local Columbus
receptors. Running on two instances using BashReduce, our project achieves
almost a 30% speedup processing the raw contents compared with processing the
contents on one instance locally. The experimental results also show that the
LDA model applied in our project provides a precision rate 53.96% higher than
the TF-IDF model in finding target words, provided that an appropriate size of
the topic-words list is selected.
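The LDA estimation step can be illustrated with a tiny collapsed Gibbs sampler over toy documents: each token's topic is resampled from its conditional given all other assignments, and topic words are read off the final topic-word counts. This is a generic sketch, not the project's BashReduce pipeline; the corpus and all parameters are artificial.

```python
import numpy as np

def lda_gibbs(docs, V, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampler for LDA. docs: lists of word ids in range(V).
    Returns the K x V topic-word count matrix from the final sweep."""
    rng = np.random.default_rng(seed)
    nd = np.zeros((len(docs), K))          # document-topic counts
    nw = np.zeros((K, V))                  # topic-word counts
    z = [[rng.integers(K) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):         # count initial random assignments
        for i, w in enumerate(doc):
            nd[d, z[d][i]] += 1
            nw[z[d][i], w] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                nd[d, k] -= 1; nw[k, w] -= 1   # remove token, then resample
                p = (nd[d] + alpha) * (nw[:, w] + beta) / (nw.sum(axis=1) + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                nd[d, k] += 1; nw[k, w] += 1
    return nw  # top words per topic = argsort of each row

# Two artificial "topics": words 0-2 co-occur, words 3-5 co-occur.
docs = [[0, 1, 2, 0, 1], [3, 4, 5, 3, 5]] * 5
nw = lda_gibbs(docs, V=6, K=2)
print(nw.argmax(axis=1))  # dominant word id of each topic
```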


## Bilateral correspondence model for words-and-pictures association in multimedia-rich microblogs

Source: Association for Computing Machinery, Inc.
Publisher: Association for Computing Machinery, Inc.

Type: Scientific journal article

Search relevance: 35.78%

Nowadays, the amount of multimedia content in microblogs is growing significantly. More than 20% of microblogs link to a picture or video in certain large systems. The rich semantics in microblogs provide an opportunity to endow images with higher-level semantics beyond object labels. However, this raises new challenges for understanding the association between multimodal multimedia contents in multimedia-rich microblogs. Disobeying the fundamental assumptions of traditional annotation, tagging, and retrieval systems, pictures and words in multimedia-rich microblogs are loosely associated, and a correspondence between pictures and words cannot be established. To address these challenges, we present the first study analyzing and modeling the associations between multimodal contents in microblog streams, aiming to discover multimodal topics from microblogs by establishing correspondences between pictures and words in microblogs. We first use a data-driven approach to analyze the new characteristics of the words, pictures, and their association types in microblogs. We then propose a novel generative model called the Bilateral Correspondence Latent Dirichlet Allocation (BC-LDA) model. Our BC-LDA model can assign flexible associations between pictures and words, and is able not only to allow picture-word co-occurrence with bilateral directions...
