Page 1 of results: 5593 digital items found in 0.013 seconds

Bridging the gap between cluster and grid computing

Alves, Albano; Pina, António
Source: Springer Publisher: Springer
Type: Scientific journal article
ENG
Search relevance
56.32%
The Internet computing model, with its ubiquitous networking and computing infrastructure, is driving a new class of interoperable applications that benefit from both high computing power and multiple Internet connections. In this context, grids are promising computing platforms that allow distributed resources such as workstations and clusters to be aggregated to solve large-scale problems. However, because most parallel programming tools were primarily developed for MPP and cluster computing, higher-level abstractions and cooperative interfaces are required to exploit the new environment. Rocmeμ is a platform originally designed to support the operation of multi-SAN clusters that integrates application modeling and resource allocation. In this paper we show how the underlying resource-oriented computation model provides the necessary abstractions to accommodate the migration from cluster computing to multicluster grid-enabled computing.

Commodity cluster computing for computational chemistry

Hawick, K.; Grove, D.; Coddington, P.; Buntine, M.
Source: Internet Journal of Chemistry Publisher: Internet Journal of Chemistry
Type: Scientific journal article
Published in 2000 EN
Search relevance
66.37%
Access to high-performance computing power remains crucial for many computational chemistry problems. Unfortunately, traditional supercomputers or cluster computing solutions from commercial vendors remain very expensive, even for entry level configurations, and are therefore often beyond the reach of many small to medium-sized research groups and universities. Clusters of networked commodity computers provide an alternative computing platform that can offer substantially better price/performance than commercial supercomputers. We have constructed a networked PC cluster, or Beowulf, dedicated to computational chemistry problems using standard ab initio molecular orbital software packages such as Gaussian and GAMESS-US. This paper introduces the concept of Beowulf computing clusters and outlines the requirements for running the ab initio software packages used by computational chemists at the University of Adelaide. We describe the economic and performance trade-offs and design choices made in constructing the Beowulf system, including the choice of processors, networking, storage systems, operating system and job queuing software. Other issues such as throughput, scalability, software support, maintenance, and future trends are also discussed. We present some benchmark results for the Gaussian 98 and GAMESS-US programs...

A Flexible Multi-Dimensional QoS Performance Measure Framework for Distributed Heterogeneous Systems

Jong-Kook, Kim; Hensgen, Debra A.; Kidd, Taylor; Siegel, Howard Jay; Levin, Tim; Porter, N. Wayne; Freund, Richard F.; St. John, David; Irvine, Cynthia E.; Prasanna, Viktor K.
Source: Cluster Computing Publisher: Cluster Computing
Type: Scientific journal article
Search relevance
56.21%
When users' tasks in a distributed heterogeneous computing environment (e.g., a cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here quantifies this collective value. The FISC measure is a flexible multidimensional measure that may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, satisfied data communication requests can be the basis of the FISC measure instead of completed tasks.
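As a concrete illustration of how such a collective-value measure can be assembled, the sketch below computes a FISC-like weighted sum over completed tasks; the task attributes, weighting scheme, and all names here are hypothetical stand-ins, not the paper's actual formulation.

```python
# Hypothetical sketch of a FISC-like collective-value measure.
# Task fields and the deadline discount are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    completed: bool      # did the RMS finish the task in the interval?
    priority: float      # user/policy-assigned weight
    value: float         # perceived value of the version/data delivered (0..1)
    met_deadline: bool   # deadline component of QoS

def fisc_value(tasks, deadline_discount=0.5):
    """Collective value of tasks completed in an interval: each completed
    task contributes its priority-weighted value, discounted if it missed
    its deadline. Dropped tasks contribute nothing."""
    total = 0.0
    for t in tasks:
        if not t.completed:
            continue
        factor = 1.0 if t.met_deadline else deadline_discount
        total += t.priority * t.value * factor
    return total

print(fisc_value([Task(True, 2.0, 1.0, True), Task(True, 1.0, 0.8, False),
                  Task(False, 3.0, 1.0, True)]))  # -> 2.4
```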

Autonomic Cloud Computing: Research Perspective

Gill, Sukhpal Singh
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Search relevance
56.25%
Cloud computing is an evolving utility computing mechanism in which a cloud consumer can discover, choose, and utilize resources (infrastructure, software, and platform) and provide services to users on a pay-per-use model, as computing utilities. The current mechanism is effective, particularly for small and medium cloud-based companies, in that it permits easy and reliable access to cloud services such as infrastructure, software, and platform. Present-day cloud computing is similar to existing models such as cluster computing and grid computing. The key technical features of cloud computing include autonomic service, rapid elasticity, end-to-end virtualization support, on-demand resource pooling, and transparency in cloud billing. Its non-technical features include environmental friendliness, little maintenance overhead, lower upfront costs, faster time to deployment, Service Level Agreements (SLAs), and the pay-as-you-go model. In a distributed computing environment, unpredictability of service is a fact, and the same is possible in the cloud. The success of next-generation cloud computing infrastructures will depend on how capably these infrastructures discover and dynamically tolerate computing platforms...

A Survey of Current Trends in Distributed, Grid and Cloud Computing

Mittal, Gaurav; Kesswani, Dr. Nishtha; Goswami, Kuldeep
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 08/08/2013
Search relevance
56.33%
From the 1990s through 2012, the Internet changed the world of computing drastically: it started its journey with parallel computing, advanced to distributed computing and further to grid computing, and in the present scenario it has created a new world known as cloud computing [1]. All three of these terms have different meanings. Cloud computing builds on earlier computing schemes such as cluster computing, distributed computing, grid computing, and utility computing. The basic concept of cloud computing is virtualization: it provides virtual hardware and software resources to requesting programs. This paper gives a detailed description of cluster computing, grid computing, and cloud computing and gives insight into some implementations of each. We try to list the inspirations for the advent of all these technologies. We also account for some current faults of grid computing and discuss new cloud computing projects for learning that are being managed by the Government of India. The paper also reviews existing work and covers, analytically and to some extent, some innovative ideas that can be implemented.; Comment: 6 pages

Design, Construction, and Use of a Single Board Computer Beowulf Cluster: Application of the Small-Footprint, Low-Cost, InSignal 5420 Octa Board

Cusick, James J.; Miller, William; Laurita, Nicholas; Pitt, Tasha
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Search relevance
56.35%
In recent years, development in the area of single board computing has been advancing rapidly. At Wolters Kluwer's Corporate Legal Services Division, a prototyping effort was undertaken to establish the utility of such devices for practical and general computing needs. This paper presents the background of this work and the design and construction of a 64-core, 96 GHz cluster with the potential to yield approximately 400 GFLOPS, built from a set of small-footprint InSignal boards for just over $2,300. Additionally, this paper discusses the software environment on the cluster, the use of a standard Beowulf library and its operation, and other software applications including Elasticsearch and ownCloud. Finally, consideration is given to the future use of such technologies in a business setting to introduce new Open Source technologies, reduce computing costs, and improve time to market. Index Terms: Single Board Computing, Raspberry Pi, InSignal Exynos 5420, Linaro Ubuntu Linux, High Performance Computing, Beowulf clustering, Open Source, MySQL, MongoDB, ownCloud, Computing Architectures, Parallel Computing, Cluster Computing; Comment: 9 Figures
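The headline figures are consistent with simple back-of-envelope arithmetic; the sketch below reproduces them under assumed per-core clock and FLOP rates, which may differ from the paper's exact parameters.

```python
# Back-of-envelope check of the cluster's headline figures, under assumed
# per-core numbers (the paper's exact clock and FLOP assumptions may differ).
boards = 8                  # InSignal Exynos 5420 boards
cores_per_board = 8         # octa-core big.LITTLE SoC
avg_clock_ghz = 1.5         # assumed average across A15/A7 cores

cores = boards * cores_per_board            # 64 cores
aggregate_ghz = cores * avg_clock_ghz       # 96 GHz aggregate clock

flops_per_cycle = 4         # assumed SIMD ops per cycle per core
peak_gflops = aggregate_ghz * flops_per_cycle
print(cores, aggregate_ghz, peak_gflops)    # 64 96.0 384.0 (~400 GFLOPS)
```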

Power-aware applications for scientific cluster and distributed computing

Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Janssen, Ruben L.; Klous, Sander; Knight, Robert; Muzaffar, Shahzad
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Search relevance
46.29%
The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton University, which provides HPC resources in a university context.; Comment: Submitted to proceedings of International Symposium on Grids and Clouds (ISGC) 2014, 23-28 March 2014, Academia Sinica, Taipei, Taiwan

Phoenix Cloud: Consolidating Different Computing Loads on Shared Cluster System for Large Organization

Zhan, Jianfeng; Wang, Lei; Tu, Bibo; Li, Yong; Wang, Peng; Zhou, Wei; Meng, Dan
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Search relevance
46.34%
Different departments of a large organization often run dedicated cluster systems for different computing loads, such as HPC (high performance computing) jobs or Web service applications. In this paper, we design and implement cloud management system software, Phoenix Cloud, to consolidate heterogeneous workloads from different departments of the same organization on a shared cluster system. We also propose cooperative resource provisioning and management policies for a large organization and its departments, running HPC jobs and Web service applications, to share the consolidated cluster system. The experiments show that, in comparison with the case in which each department operates its own dedicated cluster system, Phoenix Cloud significantly decreases the scale of the cluster system required by a large organization, improves the benefit of the scientific computing department, and at the same time provisions enough resources to the other department running Web services with varying loads.; Comment: 5 pages, 8 figures, The First Workshop of Cloud Computing and its Application, modified version. The original version, dated August 13, 2008, is on the web site http://www.cca08.org/

Experimental Study of Remote Job Submission and Execution on LRM through Grid Computing Mechanisms

Prajapati, Harshadkumar B.; Shah, Vipul A.
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 15/07/2014
Search relevance
56.29%
Remote job submission and execution is a fundamental requirement of distributed computing done using cluster computing. However, cluster computing limits usage to within a single organization. A grid computing environment can allow resources available in other organizations to be used for remote job execution. This paper discusses the concepts of batch-job execution using an LRM (local resource manager) and using a Grid. The paper discusses two ways of preparing a test grid computing environment that we use for experimental testing of these concepts. It presents experimental testing of remote job submission and execution mechanisms in the LRM-specific way and in the grid computing ways. Moreover, the paper also discusses various problems faced while working with a grid computing environment and their troubleshooting. The understanding and experimental testing presented in this paper should be very useful to researchers who are new to the field of job management in Grids.; Comment: Fourth International Conference on Advanced Computing & Communication Technologies (ACCT), 2014
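As a flavor of the LRM-specific path the paper tests, the following is a minimal sketch of batch-job submission from Python, assuming a PBS/Torque-style LRM whose `qsub` command is on the PATH; the script contents and job parameters are illustrative only.

```python
# Minimal sketch of LRM-style batch-job submission from Python, assuming a
# PBS/Torque-like LRM with `qsub` on PATH (the paper's testbed may differ).
import subprocess, tempfile, os

JOB_SCRIPT = """#!/bin/sh
#PBS -N hello_job
#PBS -l nodes=1
echo "hello from $(hostname)"
"""

def submit(script_text):
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    try:
        out = subprocess.run(["qsub", path], capture_output=True,
                             text=True, check=True)
        return out.stdout.strip()   # job id printed by qsub, e.g. "42.headnode"
    finally:
        os.unlink(path)

print(submit(JOB_SCRIPT))
```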

Cluster computing performances using virtual processors and mathematical software

Argentini, Gianluca
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 09/01/2004
Search relevance
66.2%
In this paper I describe some results on the use of virtual processor technology for parallelizing some SPMD computational programs in a cluster environment. The tested technology is Intel Hyper-Threading on real processors, and the programs are MATLAB 6.5 Release 13 scripts for floating-point computation. Using this technology, I found that a cluster can beneficially run a number of concurrent processes equal to double the number of physical processors. The conclusions of the work concern the utility and limits of the approach used. The main result is that using virtual processors is a good technique for improving parallel programs not only for memory-based computations, but in the case of massive disk-storage operations too.; Comment: 9 pages; 1 figure; keywords: cluster computing, virtual processors, performances
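A toy version of this oversubscription experiment can be reproduced in any language; the sketch below (Python rather than the paper's MATLAB) times a CPU-bound kernel with one worker per physical core and then with twice as many, under the assumption that half of `os.cpu_count()` approximates the physical core count.

```python
# Toy experiment in the spirit of the paper: compare N workers (N = physical
# cores) against 2N on a CPU-bound SPMD-style kernel. Results are machine
# dependent; this is an illustration, not the paper's MATLAB setup.
import os, time
from multiprocessing import Pool

def kernel(n):
    s = 0.0
    for i in range(1, n):
        s += 1.0 / (i * i)      # floating-point workload
    return s

def timed_run(workers, tasks=16, size=2_000_000):
    start = time.perf_counter()
    with Pool(workers) as p:
        p.map(kernel, [size] * tasks)
    return time.perf_counter() - start

if __name__ == "__main__":               # needed on spawn-based platforms
    physical = (os.cpu_count() or 2) // 2 or 1   # rough physical-core guess
    for w in (physical, 2 * physical):
        print(f"{w} workers: {timed_run(w):.2f} s")
```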

Performance Analysis Cluster and GPU Computing Environment on Molecular Dynamic Simulation of BRV-1 and REM2 with GROMACS

Suhartanto, Heru; Yanuar, Arry; Wibisono, Ari
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 16/10/2012
Search relevance
56.34%
One application that needs high-performance computing resources is molecular dynamics. Several software packages perform molecular dynamics; one of these is the well-known GROMACS. Our previous experiments simulating the molecular dynamics of Indonesian-grown herbal compounds showed sufficient speedup on a 32-node cluster computing environment. To obtain a reliable simulation, one usually needs to run the experiment at the scale of hundreds of nodes, but such a system is expensive to develop and maintain. Since the invention of graphics processing units that are also usable for general-purpose programming, many applications have been developed to run on them. This paper reports our experiments evaluating the performance of GROMACS in two different environments: cluster computing resources and GPU-based PCs. We ran the experiment on the BRV-1 and REM2 compounds. Four different GPUs were installed on the same type of quad-core PCs: GeForce GTS 250, GTX 465, GTX 470, and Quadro 4000. We built a cluster of 16 nodes based on these four quad-core PCs. The preliminary experiment shows that the runs on the GTX 470 perform best among the GPUs as well as against the cluster computing resource. A speedup of around 11 and 12 is gained...

Introduction to Xgrid: Cluster Computing for Everyone

Breen, Barbara J.; Lindner, John F.
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 21/07/2010
Search relevance
56.18%
Xgrid is the first distributed computing architecture built into a desktop operating system. It allows you to run a single job across multiple computers at once. All you need is at least one Macintosh computer running Mac OS X v10.4 or later. (Mac OS X Server is not required.) We provide explicit instructions and example code to get you started, including examples of how to distribute your computing jobs, even if your initial cluster consists of just two old laptops in your basement.
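A minimal sketch of such a submission, driven from Python, is shown below; it assumes the Mac OS X `xgrid` command-line client with the `-job submit` subcommand, and the controller hostname and password are placeholders.

```python
# Hedged sketch of submitting a job to an Xgrid controller from Python via
# the Mac OS X `xgrid` command-line tool (Mac OS X 10.4+). Hostname and
# password below are placeholders, not real credentials.
import subprocess

def xgrid_submit(controller, password, cmd, *args):
    return subprocess.run(
        ["xgrid", "-h", controller, "-p", password,
         "-job", "submit", cmd, *args],
        capture_output=True, text=True, check=True).stdout

# Example: run /usr/bin/cal on the grid and print the submission receipt,
# which includes the job identifier.
print(xgrid_submit("controller.local", "secret", "/usr/bin/cal"))
```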

Enhanced Cluster Computing Performance Through Proportional Fairness

Bonald, Thomas; Roberts, James
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 08/04/2014
Search relevance
66.13%
The performance of cluster computing depends on how concurrent jobs share multiple data center resource types like CPU, RAM and disk storage. Recent research has discussed efficiency and fairness requirements and identified a number of desirable scheduling objectives including so-called dominant resource fairness (DRF). We argue here that proportional fairness (PF), long recognized as a desirable objective in sharing network bandwidth between ongoing flows, is preferable to DRF. The superiority of PF is manifest under the realistic modelling assumption that the population of jobs in progress is a stochastic process. In random traffic the strategy-proof property of DRF proves unimportant while PF is shown by analysis and simulation to offer a significantly better efficiency-fairness tradeoff.; Comment: Submitted to Performance 2014
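For readers unfamiliar with the two objectives, the following sketch states them side by side in our own notation (not the paper's): PF maximizes the sum of log utilities subject to capacity constraints, while DRF max-min equalizes dominant shares.

```latex
% Notation ours: jobs i, resource types r, capacities C_r, allocations x_{ir}.
% Proportional fairness (PF) maximizes the sum of log utilities:
\max_{x \ge 0} \; \sum_i \log U_i(x_i)
\qquad \text{s.t.} \qquad \sum_i x_{ir} \le C_r \quad \forall r .
% Dominant resource fairness (DRF) instead max-min equalizes the dominant
% shares s_i = \max_r x_{ir}/C_r, i.e., it solves \max_{x} \min_i s_i .
```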

Cloud Computing and Grid Computing 360-Degree Compared

Foster, Ian; Zhao, Yong; Raicu, Ioan; Lu, Shiyong
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 31/12/2008
Search relevance
56.23%
Cloud Computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for Cloud Computing and there seems to be no consensus on what a Cloud is. On the other hand, Cloud Computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established Grid Computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast Cloud Computing with Grid Computing from various angles and give insights into the essential characteristics of both.; Comment: IEEE Grid Computing Environments (GCE08) 2008

Cluster Computing: A High-Performance Contender

Baker, Mark; Buyya, Rajkumar; Hyde, Dan
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 22/09/2000
Search relevance
66.22%
When you first heard people speak of Piles of PCs, the first thing that came to mind may have been a cluttered computer room with processors, monitors, and snarls of cables all around. Collections of computers have undoubtedly become more sophisticated than in the early days of shared drives and modem connections. No matter what you call them, Clusters of Workstations (COWs), Networks of Workstations (NOWs), Workstation Clusters (WCs), or Clusters of PCs (CoPs), clusters of computers are now filling the processing niche once occupied by more powerful stand-alone machines. This article discusses the need for cluster computing technology; its technologies, components, and applications; supercluster systems and issues; the need for a new task force; and cluster computing educational resources.

Cluster Computing White Paper

Baker, Mark
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Search relevance
66.21%
Cluster computing is not a new area of computing. It is, however, evident that there is a growing interest in its usage in all areas where applications have traditionally used parallel or distributed computing platforms. The growing interest has been fuelled in part by the availability of powerful microprocessors and high-speed networks as off-the-shelf commodity components, as well as by the rapidly maturing software components available to support high performance and high availability applications. This White Paper has been broken down into eleven sections, each of which has been put together by academics and industrial researchers who are both experts in their fields and were willing to volunteer their time and effort to put together this White Paper. The status of this paper is draft and we are at the stage of publicizing its presence and making a Request For Comments (RFC).; Comment: 119 page white paper - Edited by Mark Baker (University of Portsmouth), version 2.0

An Improved Multiple Faults Reassignment based Recovery in Cluster Computing

Bansal, Sanjay; Sharma, Sanjeev
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 13/02/2011
Search relevance
66.21%
In the case of multiple node failures, performance becomes very low compared to a single node failure. Node failures in cluster computing can be tolerated through multiple-fault-tolerant computing. Existing recovery schemes are efficient for a single fault but not for multiple faults. The recovery scheme proposed in this paper has two phases: a sequential phase and a concurrent phase. In the sequential phase, the loads of all working nodes are distributed uniformly and evenly by the proposed dynamic rank-based load distribution algorithm. In the concurrent phase, the loads of all failed nodes, as well as newly arriving jobs, are assigned equally to the available nodes by finding the least-loaded node among them with the failed-node job allocation algorithm. Sequential and concurrent execution of the algorithms improves performance as well as resource utilization. The dynamic rank-based algorithm for load redistribution works as a sequential restoration algorithm, and the reassignment algorithm that distributes the jobs of failed nodes to the least-loaded computing nodes works as a concurrent recovery reassignment algorithm. Since the load is evenly and uniformly distributed among all available working nodes with fewer iterations, low iteration time, and low communication overhead, performance is improved. The dynamic ranking algorithm has low overhead...
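The concurrent-phase idea, assigning each orphaned job to the currently least-loaded working node, can be sketched in a few lines; the heap-based structure and all names below are ours, and the paper's rank-based sequential phase is not shown.

```python
# Illustrative sketch of the concurrent-phase reassignment described above:
# jobs from failed nodes (and new arrivals) go to the currently least-loaded
# working node. A heap keeps the least-loaded lookup cheap. Names are ours.
import heapq

def reassign(working_loads, orphaned_jobs):
    """working_loads: {node: load}; orphaned_jobs: [(job, cost), ...]."""
    heap = [(load, node) for node, load in working_loads.items()]
    heapq.heapify(heap)
    placement = {}
    for job, cost in orphaned_jobs:
        load, node = heapq.heappop(heap)       # least-loaded node right now
        placement[job] = node
        heapq.heappush(heap, (load + cost, node))
    return placement

print(reassign({"n1": 4.0, "n2": 1.0, "n3": 2.5},
               [("j1", 1.0), ("j2", 2.0), ("j3", 0.5)]))
# -> {'j1': 'n2', 'j2': 'n2', 'j3': 'n3'}
```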

Distributed mining of large scale remote sensing image archives on public computing infrastructures

Mascolo, Luigi; Quartulli, Marco; Guccione, Pietro; Nico, Giovanni; Olaizola, Igor G.
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 17/01/2015
Search relevance
56.21%
Earth Observation (EO) mining aims at supporting efficient access and exploration of petabyte-scale space- and airborne remote sensing archives that are currently expanding at rates of terabytes per day. A significant challenge is performing the analysis required by envisaged applications (for instance, process mapping for environmental risk management) in reasonable time. In this work, we address the problem of content-based image retrieval via example-based queries from EO data archives. In particular, we focus on the analysis of polarimetric SAR data, for which target decomposition theorems have proved fundamental in discovering patterns in data and characterizing the ground scattering properties. To this end, we propose an interactive region-oriented content-based image mining system in which 1) unsupervised ingestion processes are distributed onto virtual machines in elastic, on-demand computing infrastructures, 2) archive-scale hierarchical content indexing is implemented in terms of a "big data" analytics cluster-computing framework, and 3) query processing amounts to traversing the generated binary tree index, computing distances that correspond to descriptor-based similarity measures between image groups and a query image tile. We describe in depth both the strategies and the actual implementations of the ingestion and indexing components...
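The query-processing step lends itself to a compact sketch: descend the binary index tree, at each node following the child whose group descriptor is closer to the query tile's descriptor. The tree layout and descriptor distance below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the described query step: walk a binary index tree toward the
# leaf whose group descriptor best matches the query tile. Names are ours.
import math

class Node:
    def __init__(self, descriptor, left=None, right=None, tiles=()):
        self.descriptor = descriptor
        self.left, self.right, self.tiles = left, right, tiles

def dist(a, b):
    # descriptor-based similarity, here simply Euclidean distance
    return math.dist(a, b)

def query(node, q):
    while node.left and node.right:
        node = (node.left
                if dist(node.left.descriptor, q) <= dist(node.right.descriptor, q)
                else node.right)
    return node.tiles   # candidate image tiles at the reached leaf

leaf1 = Node((0.1, 0.2), tiles=["tileA"])
leaf2 = Node((0.9, 0.8), tiles=["tileB"])
root = Node((0.5, 0.5), leaf1, leaf2)
print(query(root, (0.8, 0.7)))  # -> ['tileB']
```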

Coscheduling techniques for non-dedicated cluster computing

Solsona Tehàs, Francesc
Source: Bellaterra : Universitat Autònoma de Barcelona Publisher: Bellaterra : Universitat Autònoma de Barcelona
Type: Electronic theses and dissertations; info:eu-repo/semantics/doctoralThesis Format: application/pdf
Published in 2003 ENG
Search relevance
56.18%
Available from TDX; title taken from the digitized cover. This thesis focuses on building a virtual machine on top of a cluster system that provides the dual functionality of efficiently executing both traditional (or local) workstation jobs and distributed applications. To solve this problem, two important considerations must be taken into account: * how to share and schedule the resources of the different workstations (especially the CPU) between local and distributed applications; * how to manage and control the system as a whole to achieve efficient execution of both types of applications. Coscheduling is the basic principle used to share and schedule the CPU. Coscheduling is based on reducing the communication waiting time of distributed applications by simultaneously scheduling all (or a subset of) the tasks that compose them. Therefore, by using coscheduling techniques, one can only increase the performance of distributed applications with remote communication between their constituent tasks. Coscheduling techniques are classified into two large groups: explicit control and implicit control. This classification is based on how the distributed tasks are coscheduled. In explicit control...

Cluster computing and the power of edge recognition

Hemaspaandra, Lane; Homan, Christopher; Kosub, Sven
Source: Elsevier - Information and Computation Publisher: Elsevier - Information and Computation
Type: Scientific journal article
EN_US
Search relevance
56.16%
Though complexity theory already extensively studies path-cardinality-based restrictions on the power of nondeterminism, this paper is motivated by a more recent goal: to gain insight into how much of a restriction of nondeterminism it is to limit machines to have just one contiguous (with respect to some simple order) interval of accepting paths. In particular, we study the robustness—the invariance under definition changes—of the cluster class CL#P. This class contains each #P function that is computed by a balanced Turing machine whose accepting paths always form a cluster with respect to some length-respecting total order with efficient adjacency checks. The definition of CL#P is heavily influenced by the defining paper's focus on (global) orders. In contrast, we define a cluster class, CLU#P, to capture what seems to us a more natural model of cluster computing. We prove that the naturalness is costless: CL#P = CLU#P. Then we exploit the more natural, flexible features of CLU#P to prove new robustness results for CL#P and to expand what is known about the closure properties of CL#P. The complexity of recognizing edges—of an ordered collection of computation paths or of a cluster of accepting computation paths—is central to this study. Most particularly...
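The cluster condition is easier to see written out; the following is an informal restatement assembled from the abstract's own ingredients, in our notation rather than the paper's.

```latex
% Informal restatement (notation ours): f is in CL#P iff there exist a
% balanced NP machine M and a length-respecting total order \preceq with
% efficient (polynomial-time) adjacency checks such that, for all inputs x,
f(x) \;=\; \#\mathrm{acc}_M(x)
\quad\text{and}\quad
\mathrm{acc}_M(x) \text{ forms a single } \preceq\text{-contiguous interval,}
% where acc_M(x) denotes the set of accepting computation paths of M on x.
```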