Page 1 of results: 3,779 digital items found in 0.013 seconds

- JOHN WILEY & SONS LTD
- Biblioteca Digital de Teses e Dissertações da USP
- Faculdade de Ciências e Tecnologia
- Australian National University
- University of Tübingen
- Rice University
- Universidade Federal do Rio Grande do Sul
- Springer
- Cornell University
- Slovene Society Informatika
- Elsevier

## Performance results of running parallel applications on the InteGrade

Source: JOHN WILEY & SONS LTD
Publisher: JOHN WILEY & SONS LTD

Type: Journal article

ENG

Search relevance

56.21%

#Grid computing#middleware#parallel algorithms#chain matrix product#0-1 Knapsack Problem#local sequence alignment problem#ALGORITHMS#COMPUTATION#SEQUENCE#Computer Science, Software Engineering#Computer Science, Theory & Methods

The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic or wavefront paradigm to design the parallel algorithms in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack Problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable, in view of the benefits obtained to facilitate the use of grid computing by the user. These benefits include job submission, checkpointing, security, job migration, etc. Copyright (C) 2009 John Wiley & Sons, Ltd.; Funding: FAPESP [2004/08928-3]; CNPq [55.0895/07-8, 30.5362/06-2, 30.2942/04-1, 62.0123/04-4, 48.5460/06-8, 62.0171/06-5]; FUNDECT [41/100.115/2006].
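The wavefront idea the abstract relies on can be sketched on one of its three benchmarks, the 0-1 Knapsack Problem: in the dynamic-programming table, every cell of the current row depends only on the previous row, so a whole row can be filled in parallel with no global communication. A minimal sequential sketch (illustrative only, not InteGrade's actual MPI code):

```python
def knapsack(values, weights, capacity):
    # Row-by-row 0-1 knapsack DP. Each "wavefront" (one row per item) depends
    # only on the previous row, so all cells of `cur` could be computed in
    # parallel across capacities c -- the systolic pattern the paper exploits.
    prev = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        cur = [max(prev[c], (prev[c - w] + v) if c >= w else 0)
               for c in range(capacity + 1)]
        prev = cur
    return prev[capacity]
```

The same independent-cells-per-wavefront structure (anti-diagonals instead of rows) applies to the matrix chain product and local sequence alignment DPs mentioned in the abstract.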

Permanent link for citations:

## Desenvolvimento de modelos e algoritmos sequenciais e paralelos para o planejamento da expansão de sistemas de transmissão de energia elétrica; Development of mathematical models, sequential and parallel algorithms for transmission expansion planning

Source: Biblioteca Digital de Teses e Dissertações da USP
Publisher: Biblioteca Digital de Teses e Dissertações da USP

Type: Doctoral thesis
Format: application/pdf

Published on 16/03/2012
PT

Search relevance

56.28%

#Algoritmos multiobjetivos#Evolutionary algorithms#Metaheurísticas#Metaheuristics#Multiple generation dispatch scenarios#Múltiplos cenários de geração#Optimization#Otimização#Parallel programming#Planejamento de sistemas de transmissão#Programação paralela

The main objective of this study is to propose a new methodology for the problem of Transmission Expansion Planning with Multiple Generation Scenarios (PERTEEG). The methodology proposed in this work aims to build transmission network expansion plans that, at the lowest possible investment cost, satisfy the new requirements of modern electric power systems, such as congestion-free transmission networks that are robust to uncertainty about future generation scenarios. A study of the problem's literature showed that new models and solution methodologies for the PERTEEG are needed. When the PERTEEG is modeled so as to build transmission networks that cope with uncertainty about future generation scenarios while simultaneously minimizing the investment cost of expanding the system, the planner faces a multiobjective optimization problem. The operations research literature offers several algorithms for handling multiobjective problems. In this thesis, two of them were applied: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). In a first analysis...

Permanent link for citations:

## Parallel programming in biomedical signal processing

Source: Faculdade de Ciências e Tecnologia
Publisher: Faculdade de Ciências e Tecnologia

Type: Master's dissertation

Published //2012
ENG

Search relevance

56.27%

#Parallel processing#Parallel algorithms#Biosignals#Signal processing#Heart rate variability#Respiration

Dissertation submitted for the degree of Master in Biomedical Engineering; Patients with neuromuscular and cardiorespiratory diseases need to be monitored continuously. This constant monitoring gives rise to huge amounts of multivariate data which need to be processed as soon as possible, so that their most relevant features can be extracted. The field of parallel processing, an area of the computational sciences, is a natural way to answer this need. For parallel processing to succeed, pre-existing signal processing algorithms must be adapted to the modern architectures of computer systems with several processing units. In this work parallel processing techniques are applied to biosignals, connecting the area of computer science to the biomedical domain. Several considerations are made on how to design parallel algorithms for signal processing, following the data-parallel paradigm. The emphasis is on algorithm design rather than on the computing systems that execute these algorithms; nonetheless, both shared-memory and distributed-memory systems are discussed. Two signal processing tools integrating some of the parallel programming concepts mentioned throughout this work were developed. These tools allow a fast and efficient analysis of long-term biosignals. The two kinds of analysis focus on heart rate variability and breathing frequency...
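The data-parallel paradigm described above can be sketched for one common heart rate variability metric, RMSSD, computed over independent windows of a long RR-interval recording; the same kernel is mapped over every window. The helper names and the thread pool are illustrative assumptions, not the dissertation's actual tools:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def rmssd(rr_ms):
    # Root mean square of successive RR-interval differences (a standard HRV metric).
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def windowed_hrv(rr_ms, window, workers=4):
    # Data-parallel pattern: split the long recording into independent windows
    # and map the same kernel over them; workers process windows concurrently.
    windows = [rr_ms[i:i + window]
               for i in range(0, len(rr_ms) - window + 1, window)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rmssd, windows))
```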

Permanent link for citations:

## Designing Efficient Parallel Algorithms for Graph Problems

Source: Australian National University
Publisher: Australian National University

Type: Thesis (PhD); Doctor of Philosophy (PhD)

EN

Search relevance

66.26%

#efficient parallel algorithms#graph problems#graph algorithms#parallelization#dynamic updates#sparsification#communication networks#Parallel Random Access Machine#PRAM#meshes#hypercubes

Graph algorithms are concerned with the algorithmic aspects of solving graph problems. The problems are motivated by and have applications in diverse areas of computer science, engineering and other disciplines. Problems arising from these areas of application are good candidates for parallelization since they often have both intense computational needs and stringent response time requirements. Motivated by these concerns, this thesis investigates parallel algorithms for graph problems that have at least one of the following properties: the problems involve some type of dynamic updates; the sparsification technique is applicable; or the problems are closely related to communication network issues. The models of parallel computation used in our studies are the Parallel Random Access Machine (PRAM) model and practical interconnection network models such as meshes and hypercubes. ...

Permanent link for citations:

## Designing Parallel Algorithms for SMP Clusters; Entwurf von Parallelen Algorithmen für SMP Clusters

Source: University of Tübingen
Publisher: University of Tübingen

Type: Dissertation

EN

Search relevance

66.28%

#Programmanalyse , Parallelrechner , Cluster #004#Parallele Algorithmen , SMP Clusters , Kostenmodelle#Parallel Algorithms , SMP Clusters , Cost Models

In this thesis, we examine methods for designing and optimizing parallel algorithms for SMP clusters. This parallel computer architecture combines two different concepts. SMP clusters consist of computing nodes that are shared-memory systems: the processors within a node have access to common resources, in particular the local memory system, and can therefore communicate and synchronize through the shared memory. An interconnection network connects the nodes; communication and synchronization between processors of different nodes take place over this network and thus correspond to a distributed-memory system. This organization leads, first, to a hierarchy of parallelism, since parallelism exists both within and between the nodes. Second, it creates a hierarchy of communication: communication within a node is generally faster than communication between nodes, thanks to the shared memory. There are therefore at least two levels of hierarchy. Owing to modern trends such as hierarchical interconnection structures, metacomputing technology, where several parallel machines are connected, or Grid computing technology, which uses the Internet to unify distributed computing resources across the whole world...

Permanent link for citations:

## Parallel algorithms and architectures for subspace based channel estimation for CDMA communication systems

Source: Rice University
Publisher: Rice University

Type: Conference paper

ENG

Search relevance

66.26%

Conference Paper; This paper presents an overview and results from an ongoing research project to study parallel algorithms for the acquisition of Code Division Multiple Access (CDMA) communication signals. The goal of this research is to evaluate a class of related algorithms and architectures for the acquisition problem and map them onto parallel architectures containing DSPs. The algorithms used are generally termed subspace-based algorithms, since they involve computation of subspaces of the vector space spanned by certain observation vectors. This paper presents results from some preliminary implementations of such a subspace-based algorithm on the Texas Instruments TMS320C40 Parallel Processing Development System.

Permanent link for citations:

## Parallel algorithms in linear algebra

Source: Australian National University
Publisher: Australian National University

Type: Working/Technical Paper
Format: 166298 bytes; 356 bytes; application/pdf; application/octet-stream

EN_AU

Search relevance

56.18%

#MIMD machines#Gaussian elimination#orthogonal factorization#singular value decomposition#Amdahl's Law#parallel architectures#data movement#data distribution#linear systems#symmetric eigenvalue problems#Hestenes method

This paper provides an introduction to algorithms for fundamental linear algebra problems on various parallel computer architectures, with the emphasis on distributed-memory MIMD machines. To illustrate the basic concepts and key issues, we consider the problem of parallel solution of a nonsingular linear system by Gaussian elimination with partial pivoting. This problem has come to be regarded as a benchmark for the performance of parallel machines. We consider its appropriateness as a benchmark, its communication requirements, and schemes for data distribution to facilitate communication and load balancing. In addition, we describe some parallel algorithms for orthogonal (QR) factorization and the singular value decomposition (SVD).
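The benchmark kernel discussed above, Gaussian elimination with partial pivoting, can be sketched sequentially as follows (a plain Python baseline for reference; the paper's subject is its parallel, distributed-memory variants and their data distributions):

```python
def solve_gepp(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting,
    # then back substitution. A is a list of rows; b the right-hand side.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                     # eliminate below the pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                    # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

In the parallel setting, the columns (or blocks) of M are distributed across processors, and the pivot search and row broadcast become the communication steps the paper analyzes.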

Permanent link for citations:

## Algoritmos paralelos para alocação e gerência de processadores em máquinas multiprocessadoras hipercúbicas; Parallel algorithms for processor allocation in hypercubes

Source: Universidade Federal do Rio Grande do Sul
Publisher: Universidade Federal do Rio Grande do Sul

Type: Dissertation
Format: application/pdf

POR

Search relevance

56.39%

#Computer architecture#Arquitetura de computadores#Processamento paralelo#Parallel processing#Algoritmos paralelos#Processor allocation#Parallel algorithms#Hipercubo#Alocacao : Processadores#Hypercubes

In recent years, massively parallel machines composed of hundreds of processors have been studied as an alternative for building supercomputers. In this concept of data processing, high speed is achieved through cooperation among the many processing elements in solving a problem. Most massively parallel machines on the market use the hypercube topology to interconnect their multiple processors, or can be configured as such. An interesting alternative for sharing the processing capacity of these machines is to attach them to a network as a compute server for many users [DUT 91]. The hypercube machine then behaves as a processor bank, letting each user allocate part of its processors for personal use. This raises network performance to supercomputer levels at a relatively low cost and makes it feasible to build high-dimensional hypercube machines without leaving them underutilized. In this context, it is up to the operating system to serve the requests of the shared hypercube's users efficiently...

Permanent link for citations:

## Efficient parallel algorithms for elastic-plastic finite element analysis

Source: Springer
Publisher: Springer

Type: Journal article

Search relevance

56.18%

#Keywords: Finite element method#Gradient methods#Interfaces (materials)#Parallel algorithms#Problem solving#Stress-strain curves#Efficient parallel algorithms#Elastic-plastic problems#Interface conditions#Substructures#Elastoplasticity

This paper presents our new development of parallel finite element algorithms for elastic-plastic problems. The proposed method is based on dividing the original structure under consideration into a number of substructures which are treated as isolated fi...

Permanent link for citations:

## Simulating Parallel Algorithms in the MapReduce Framework with Applications to Parallel Computational Geometry

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published on 27/04/2010

Search relevance

56.24%

#Computer Science - Data Structures and Algorithms#Computer Science - Computational Geometry#Computer Science - Distributed, Parallel, and Cluster Computing

In this paper, we describe efficient MapReduce simulations of parallel
algorithms specified in the BSP and PRAM models. We also provide some
applications of these simulation results to problems in parallel computational
geometry for the MapReduce framework, which result in efficient MapReduce
algorithms for sorting, 1-dimensional all nearest-neighbors, 2-dimensional
convex hulls, 3-dimensional convex hulls, and fixed-dimensional linear
programming. For the case when reducers can have a buffer size of
$B=O(n^\epsilon)$, for a small constant $\epsilon>0$, all of our MapReduce
algorithms for these applications run in a constant number of rounds and have a
linear-sized message complexity, with high probability, while guaranteeing with
high probability that all reducer lists are of size $O(B)$.; Comment: Version of paper appearing in MASSIVE 2010
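A MapReduce computation proceeds in synchronous map-shuffle-reduce rounds, the resource the simulations above are measured in. A toy in-memory model of one round (function names are illustrative; real frameworks distribute the shuffle across machines):

```python
from collections import defaultdict

def mapreduce_round(records, mapper, reducer):
    # One synchronous MapReduce round: map every record to (key, value) pairs,
    # group values by key (the shuffle), then reduce each key's list.
    shuffle = defaultdict(list)
    for rec in records:
        for key, val in mapper(rec):
            shuffle[key].append(val)
    out = []
    for key, vals in shuffle.items():
        out.extend(reducer(key, vals))
    return out
```

The buffer-size constraint $B$ in the abstract corresponds to bounding the length of each `vals` list a reducer receives.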

Permanent link for citations:

## Improved bounds and parallel algorithms for the Lovasz Local Lemma

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance

56.21%

#Computer Science - Discrete Mathematics#Computer Science - Data Structures and Algorithms#Mathematics - Combinatorics

The Lovasz Local Lemma (LLL) is a cornerstone principle in the probabilistic
method of combinatorics, and a seminal algorithm of Moser & Tardos (2010)
provided an efficient randomized algorithm to implement it. This algorithm
could be parallelized to give an algorithm that uses polynomially many
processors and $O(\log^3 n)$ time, stemming from $O(\log n)$ adaptive
computations of a maximal independent set (MIS). Chung et al. (2014) developed
faster local and parallel algorithms, potentially running in time $O(\log^2
n)$, but these algorithms work under significantly more stringent conditions
than the LLL.
We give a new parallel algorithm, that works under essentially the same
conditions as the original algorithm of Moser & Tardos, but uses only a single
MIS computation, thus running in $O(\log^2 n)$ time. This conceptually new
algorithm also gives a clean combinatorial description of a satisfying
assignment which might be of independent interest. Our techniques extend to the
deterministic LLL algorithm given by Chandrasekaran et al (2013) leading to an
NC-algorithm running in time $O(\log^2 n)$ as well.
We also provide improved bounds on the run-times of the sequential and
parallel resampling-based algorithms originally developed by Moser & Tardos.
These bounds extend to any problem instance in which the tighter Shearer LLL
criterion is satisfied. We also improve on the analysis of Kolipaka & Szegedy
(2011) to give tighter concentration results. Interestingly...
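The Moser-Tardos resampling scheme underlying these results can be sketched in its simplest setting, CNF-like constraints over independent random bits: while some bad event (violated clause) occurs, resample only the variables it depends on. An illustrative sequential sketch, not the paper's parallel MIS-based algorithm:

```python
import random

def moser_tardos(n_vars, clauses, seed=0):
    # Each clause is a list of (variable, wanted_value) literals and is a
    # "bad event" when every literal is falsified. Resample only the
    # variables of one violated clause at a time, as in Moser & Tardos.
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n_vars)]
    def violated(cl):
        return all(x[v] != want for v, want in cl)
    while True:
        bad = [cl for cl in clauses if violated(cl)]
        if not bad:
            return x
        for v, _ in bad[0]:
            x[v] = rng.random() < 0.5
```

The parallel versions discussed above resample a maximal independent set of violated events per round instead of one at a time.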

Permanent link for citations:

## Parallel Algorithms for Counting Triangles in Networks with Large Degrees

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published on 22/06/2014

Search relevance

56.24%

#Computer Science - Distributed, Parallel, and Cluster Computing#Computer Science - Data Structures and Algorithms#Computer Science - Social and Information Networks#D.1.3#G.2.2#H.2.8

Finding the number of triangles in a network is an important problem in the
analysis of complex networks. The number of triangles also has important
applications in data mining. Existing distributed memory parallel algorithms
for counting triangles are either Map-Reduce based or message passing interface
(MPI) based and work with overlapping partitions of the given network. These
algorithms are designed for very sparse networks and do not work well when the
degrees of the nodes are relatively large. For networks with larger degrees,
the Map-Reduce based algorithm generates prohibitively large intermediate data, and
in MPI based algorithms with overlapping partitions, each partition can grow as
large as the original network, wiping out the benefit of partitioning the
network.
In this paper, we present two efficient MPI-based parallel algorithms for
counting triangles in massive networks with large degrees. The first algorithm
is a space-efficient algorithm for networks that do not fit in the main memory
of a single compute node. This algorithm divides the network into
non-overlapping partitions. The second algorithm is for the case where the main
memory of each node is large enough to contain the entire network. We observe
that for such a case...
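The sequential kernel behind such parallel triangle counters is common-neighbor intersection with a vertex ordering, so that each triangle is counted exactly once (an illustrative sketch; the paper's contribution is how to partition this work across MPI processes without overlap):

```python
def count_triangles(edges):
    # Count triangles once each: for every edge (u, v) with u < v, count
    # common neighbors w with w > v, so triangle {u, v, w} is seen only
    # from its smallest edge.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for u, v in edges:
        if u < v:
            total += sum(1 for w in adj[u] & adj[v] if w > v)
    return total
```

In the MPI algorithms, the edge loop is what gets divided into (non-overlapping) partitions, and the per-partition counts are summed with a reduction.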

Permanent link for citations:

## Parallel algorithms in linear algebra

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published on 29/04/2010

Search relevance

56.21%

#Computer Science - Data Structures and Algorithms#Computer Science - Numerical Analysis#Mathematics - Numerical Analysis#65-01 (Primary) 65-02, 65F05, 65F15, 68-01, 68-02 (Secondary)#C.1.2#C.1.4#D.1.3#G.1.0#G.4

This report provides an introduction to algorithms for fundamental linear
algebra problems on various parallel computer architectures, with the emphasis
on distributed-memory MIMD machines. To illustrate the basic concepts and key
issues, we consider the problem of parallel solution of a nonsingular linear
system by Gaussian elimination with partial pivoting. This problem has come to
be regarded as a benchmark for the performance of parallel machines. We
consider its appropriateness as a benchmark, its communication requirements,
and schemes for data distribution to facilitate communication and load
balancing. In addition, we describe some parallel algorithms for orthogonal
(QR) factorization and the singular value decomposition (SVD).; Comment: 17 pages. An old Technical Report, submitted for archival purposes.
For further details see http://wwwmaths.anu.edu.au/~brent/pub/pub128.html

Permanent link for citations:

## Distributed and Parallel Algorithms for Set Cover Problems with Small Neighborhood Covers

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published on 27/12/2013

Search relevance

56.18%

#Computer Science - Data Structures and Algorithms#Computer Science - Distributed, Parallel, and Cluster Computing#F.2.2

In this paper, we study a class of set cover problems that satisfy a special
property which we call the {\em small neighborhood cover} property. This class
encompasses several well-studied problems including vertex cover, interval
cover, bag interval cover and tree cover. We design unified distributed and
parallel algorithms that can handle any set cover problem falling under the
above framework and yield constant factor approximations. These algorithms run
in polylogarithmic communication rounds in the distributed setting and are in
NC, in the parallel setting.; Comment: Full version of FSTTCS'13 paper

Permanent link for citations:

## Design and implementation of self-adaptable parallel algorithms for scientific computing on highly heterogeneous HPC platforms

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published on 14/09/2011

Search relevance

56.3%

Traditional heterogeneous parallel algorithms, designed for heterogeneous
clusters of workstations, are based on the assumption that the absolute speed
of the processors does not depend on the size of the computational task. This
assumption proved inaccurate for modern and emerging highly heterogeneous
HPC platforms. A new class of algorithms based on the functional performance
model (FPM), representing the speed of the processor by a function of problem
size, has recently been proposed. However, these algorithms cannot be employed
in self-adaptable applications because of the very high cost of constructing the
functional performance model. The paper presents a new class of parallel
algorithms for highly heterogeneous HPC platforms. Like traditional FPM-based
algorithms, these algorithms assume that the speed of the processors is
characterized by speed functions rather than speed constants. Unlike the
traditional algorithms, they do not assume the speed functions to be given.
Instead, they estimate the speed functions of the processors for different
problem sizes during their execution. These algorithms do not construct the
full speed function for each processor but rather build and use their partial
estimates sufficient for optimal distribution of computations with a given
accuracy. The low execution cost of distributing computations between
heterogeneous processors in these algorithms makes them suitable for employment
in self-adaptable applications. Experiments with parallel matrix multiplication
applications based on this approach are performed on local and global
heterogeneous computational clusters. The results show that the execution time
of optimal matrix distribution between processors is significantly less...

Permanent link for citations:

## Improved Parallel Algorithms for Spanners and Hopsets

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance

56.21%

We use exponential start time clustering to design faster and more
work-efficient parallel graph algorithms involving distances. Previous
algorithms usually rely on graph decomposition routines with strict
restrictions on the diameters of the decomposed pieces. We weaken these bounds
in favor of stronger local probabilistic guarantees. This allows more direct
analyses of the overall process, giving:

- Linear work parallel algorithms that construct spanners with $O(k)$ stretch and size $O(n^{1+1/k})$ in unweighted graphs, and size $O(n^{1+1/k} \log k)$ in weighted graphs.
- Hopsets that lead to the first parallel algorithm for approximating shortest paths in undirected graphs with $O(m\;\mathrm{polylog}\;n)$ work.

Permanent link for citations:

## Practical Parallel External Memory Algorithms via Simulation of Parallel Algorithms

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published on 19/01/2010

Search relevance

56.33%

#Computer Science - Distributed, Parallel, and Cluster Computing#Computer Science - Data Structures and Algorithms

This thesis introduces PEMS2, an improvement to PEMS (Parallel External
Memory System). PEMS executes Bulk-Synchronous Parallel (BSP) algorithms in an
External Memory (EM) context, enabling computation with very large data sets
which exceed the size of main memory. Many parallel algorithms have been
designed and implemented for Bulk-Synchronous Parallel models of computation.
Such algorithms generally assume that the entire data set is stored in main
memory at once. PEMS overcomes this limitation without requiring any
modification to the algorithm by using disk space as memory for additional
"virtual processors". Previous work has shown this to be a promising approach
which scales well as computational resources (i.e. processors and disks) are
added. However, the technique incurs significant overhead when compared with
purpose-built EM algorithms. PEMS2 introduces refinements to the simulation
process intended to reduce this overhead as well as the amount of disk space
required to run the simulation. New functionality is also introduced, including
asynchronous I/O and support for multi-core processors. Experimental results
show that these changes significantly improve the runtime of the simulation.
PEMS2 narrows the performance gap between simulated BSP algorithms and their
hand-crafted EM counterparts...

Permanent link for citations:

## Computing Multidimensional Aggregates in Parallel

Source: Slovene Society Informatika
Publisher: Slovene Society Informatika

Type: Journal article

Search relevance

56.26%

#Keywords: Data structures#Database systems#Decision support systems#Graph theory#Magnetic disk storage#Multiprocessing systems#Parallel algorithms#Aggregation computation#Data cube#Data warehousing#Online analytical processing application

Computing multiple related group-bys and aggregates is one of the core operations of On-Line Analytical Processing (OLAP) applications. This kind of computation involves a huge volume of data operations (megabytes or terabytes). The response time for such applications is crucial, so using parallel processing techniques to handle such computation is inevitable. In this paper we present several parallel algorithms for computing a collection of group-by aggregations on a multiprocessor system with shared disks. We focus on a special case of the aggregation problem, the 'Cube' operator, which computes group-by aggregations over all possible combinations of a list of attributes. The proposed algorithms introduce a novel processor scheduling policy and a non-trivial decomposition approach for the problem in the parallel environment. In particular, we believe the proposed hybrid algorithm has the best performance potential among the four proposed algorithms. All the proposed algorithms are scalable.
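The 'Cube' operator described above can be sketched directly: aggregate the measure over every subset of the grouping attributes. A tiny in-memory SUM-only sketch (field names are illustrative; the paper's algorithms schedule and decompose these group-bys across processors and shared disks):

```python
from itertools import combinations

def cube(rows, dims, measure):
    # CUBE: one SUM group-by per subset of `dims`, keyed by that subset.
    out = {}
    for k in range(len(dims) + 1):
        for combo in combinations(dims, k):
            agg = {}
            for row in rows:
                key = tuple(row[d] for d in combo)
                agg[key] = agg.get(key, 0) + row[measure]
            out[combo] = agg
    return out
```

For d attributes this yields 2^d group-bys; the decomposition approaches in the paper exploit that coarser group-bys can be derived from finer ones instead of rescanning the data.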

Permanent link for citations:

## Scalable parallel algorithms for surface fitting and data mining

Source: Elsevier
Publisher: Elsevier

Type: Journal article

Search relevance

66.21%

#Keywords: Computational complexity#Computational geometry#Curve fitting#Data mining#Data structures#Finite element method#Mathematical models#Parallel algorithms#Problem solving#Storage allocation (computer)#Wavelet transforms

This paper presents scalable parallel algorithms for high-dimensional surface fitting and predictive modelling which are used in data mining applications. These algorithms are based on techniques such as finite elements, thin plate splines, wavelets and additive models. They all consist of two steps: first, data is read from secondary storage and a linear system is assembled; second, the linear system is solved. The assembly can be done with almost no communication, and the size of the linear system is independent of the data size. Thus the presented algorithms are scalable in both the data size and the number of processors.

Permanent link for citations:

## Very fast parallel algorithms for approximate edge coloring

Source: Elsevier
Publisher: Elsevier

Type: Journal article

Search relevance

66.17%

This paper presents very fast parallel algorithms for approximate edge coloring. Let $\log^{(1)} n = \log n$, $\log^{(k)} n = \log(\log^{(k-1)} n)$, and $\log^*(n) = \min\{k \mid \log^{(k)} n < 1\}$. It is shown that a graph with $n$ vertices and $m$ edges can be edge colored with $(2\lceil \log^{1/4} \log^*(n) \rceil)$...
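The iterated logarithm log*(n) defined in the abstract is easy to compute directly (a small sketch; base-2 logarithm assumed):

```python
import math

def log_star(n, log=math.log2):
    # log*(n) = min{k : log^(k)(n) < 1}: count how many times the logarithm
    # must be applied before the value drops below 1.
    k = 0
    x = float(n)
    while x >= 1:
        x = log(x)
        k += 1
    return k
```

log* grows extremely slowly (log*(65536) = 5 with base 2), which is why bounds involving it are regarded as nearly constant.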

Permanent link for citations: