Page 1 of results of 10490 digital items found in 0.020 seconds

Complexidade computacional e o problema P vs NP; Computational complexity and the P vs NP problem

Igor Carboni Oliveira
Source: Biblioteca Digital da Unicamp  Publisher: Biblioteca Digital da Unicamp
Type: Master's Thesis  Format: application/pdf
Published on 02/08/2010  PT
Search relevance: 66.31%
Computational complexity theory seeks to establish limits on the efficiency of algorithms by investigating the inherent difficulty of computational problems. The P vs NP problem is a central question in computational complexity. Informally, it asks whether, for an important class of computational problems, exhaustive search for solutions is essentially the best possible algorithmic alternative. This dissertation offers both a classical introduction to the subject and an exposition of several more advanced theorems, recent results, and open problems. In particular, the method of diagonalization is discussed in depth. The main results obtained by diagonalization are the time and space hierarchy theorems (Hartmanis and Stearns [54, 104]). We present a generalization of these results, obtaining the classical theorems proved by Hartmanis and Stearns as corollaries. This is the first time a unified proof of these results has appeared in the literature.
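
The hierarchy theorems cited above admit a compact statement. In its standard modern form (which may differ in detail from the generalization proved in the dissertation), the deterministic time hierarchy theorem of Hartmanis and Stearns says that, for time-constructible $g$,

\[
f(n)\,\log f(n) \;=\; o\big(g(n)\big) \;\Longrightarrow\; \mathsf{DTIME}\big(f(n)\big) \;\subsetneq\; \mathsf{DTIME}\big(g(n)\big),
\]

and the space hierarchy theorem is tighter still: $f(n) = o(g(n))$ alone separates $\mathsf{DSPACE}(f(n))$ from $\mathsf{DSPACE}(g(n))$ for space-constructible $g$.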

Existential Theorems in Computational Complexity Theory: Size and Robustness

Zimand, Marius; Hemaspaandra, Lane A.
Source: University of Rochester. Computer Science Department.  Publisher: University of Rochester. Computer Science Department.
Type: Thesis; Technical Report
ENG
Search relevance: 66.29%
Thesis (Ph. D.)--University of Rochester. Dept. of Computer Science, 1996. Simultaneously published in the Technical Report series.; How strong are the results in computational complexity that assert, under certain hypotheses, the existence of an object? Are there many such objects, or are there few? To what extent can we relax the hypotheses and still maintain the same conclusions? These are the types of questions that are studied in this thesis. More precisely, we investigate some of the central existential results in computational complexity from the point of view of size and robustness. Below is a sample of the results in the thesis. We show that for any effective enumeration of computational devices that cover the whole set of computable functions and for any complexity measure satisfying a single axiom, neither the set of speedable functions nor the set of functions that generate complexity gaps is small from a topological point of view. We show that, with probability one on the set of oracles, there is a set in NP^A that asymptotically splits in half any infinite set in P^A. This is the strongest currently known relativized separation of NP from P. We also show that most (in the resource-bounded measure sense) sets that are computable in exponential time do not have even very weak membership-related properties that are computable in polynomial time. We prove that in almost all relativized worlds...

Computational Complexity Analysis of Genetic Programming - Initial Results and Future Directions

Neumann, F.; O'Reilly, U.M.; Wagner, M.
Source: Springer; United States  Publisher: Springer; United States
Type: Book Section
Published on //2011  EN
Search relevance: 66.23%
The computational complexity analysis of evolutionary algorithms working on binary strings has significantly increased the rigorous understanding of how these types of algorithms work. Similar results on the computational complexity of genetic programming would fill an important theoretical gap. They would significantly increase the theoretical understanding of how and why genetic programming algorithms work and indicate, in a rigorous manner, how design choices in algorithm components affect their success. We summarize initial computational complexity results for simple tree-based genetic programming and point out directions for future research.; Frank Neumann, Una-May O'Reilly and Markus Wagner; Genetic and Evolutionary Computation Series

Computational Complexity of the Minimum Cost Homomorphism Problem on Three-Element Domains

Uppman, Hannes
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.21%
In this paper we study the computational complexity of the (extended) minimum cost homomorphism problem (Min-Cost-Hom) as a function of a constraint language, i.e., a set of constraint relations and cost functions that are allowed to appear in instances. A wide range of natural combinatorial optimisation problems can be expressed as Min-Cost-Homs, and a classification of their complexity would be highly desirable, both from a direct, applied point of view and from a theoretical perspective. Min-Cost-Hom can be understood either as a flexible optimisation version of the constraint satisfaction problem (CSP) or as a restriction of the (general-valued) valued constraint satisfaction problem (VCSP). Other optimisation versions of CSPs, such as the minimum solution problem (Min-Sol) and the minimum ones problem (Min-Ones), are special cases of Min-Cost-Hom. The study of VCSPs has recently seen remarkable progress. A complete classification of the complexity of finite-valued languages on arbitrary finite domains has been obtained by Thapper and Zivny [STOC'13]. However, understanding the complexity of languages that are not finite-valued appears to be more difficult. Min-Cost-Hom allows us to study problematic languages of this type without having to deal with the full generality of the VCSP. A recent classification for the complexity of three-element Min-Sol...
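
The unextended problem has a compact generic formulation (a sketch for orientation; the paper's extended variant differs in details). Given variables $V$, a finite domain $D$, constraints over $(V, D)$ drawn from the language, and unary cost functions $c_v \colon D \to \mathbb{Q}_{\ge 0}$, Min-Cost-Hom asks for

\[
\min_{h\colon V\to D}\ \sum_{v\in V} c_v\big(h(v)\big)
\]

over all assignments $h$ that satisfy every constraint of the instance; with all costs zero this degenerates to the CSP, while the VCSP additionally lets the constraints themselves take general values.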

Is Computational Complexity a Barrier to Manipulation?

Walsh, Toby
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 05/07/2010
Search relevance: 66.25%
When agents are acting together, they may need a simple mechanism to decide on joint actions. One possibility is to have the agents express their preferences in the form of a ballot and use a voting rule to decide the winning action(s). Unfortunately, agents may try to manipulate such an election by misreporting their preferences. Fortunately, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. To address this issue, I suggest studying empirically whether computational complexity is in practice a barrier to manipulation. The basic tool used in my investigations is the identification of computational "phase transitions". Such an approach has been fruitful in identifying hard instances of propositional satisfiability and other NP-hard problems. I show that phase transition behaviour gives insight into the hardness of manipulating voting rules, raising doubts about whether computational complexity is in practice any sort of barrier. Finally, I look at the problem of computing manipulations of other, related problems like stable marriage and tournament problems.; Comment: To appear in Proceedings of 11th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA XI 2010)

Computational complexity of the quantum separability problem

Ioannou, Lawrence M.
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.2%
Ever since entanglement was identified as a computational and cryptographic resource, researchers have sought efficient ways to tell whether a given density matrix represents an unentangled, or separable, state. This paper gives the first systematic and comprehensive treatment of this (bipartite) quantum separability problem, focusing on its deterministic (as opposed to randomized) computational complexity. First, I review the one-sided tests for separability, paying particular attention to the semidefinite programming methods. Then, I discuss various ways of formulating the quantum separability problem, from exact to approximate formulations, the latter of which are the paper's main focus. I then give a thorough treatment of the problem's relationship with the complexity classes NP, NP-complete, and co-NP. I also discuss extensions of Gurvits' NP-hardness result to strong NP-hardness of certain related problems. A major open question is whether the NP-contained formulation (QSEP) of the quantum separability problem is Karp-NP-complete; QSEP may be the first natural example of a problem that is Turing-NP-complete but not Karp-NP-complete. Finally, I survey all the proposed (deterministic) algorithms for the quantum separability problem...
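
The object under study has a one-line definition: a bipartite density matrix $\rho_{AB}$ is separable exactly when it admits a convex decomposition into product states,

\[
\rho_{AB} \;=\; \sum_i p_i\, \rho_i^{A} \otimes \rho_i^{B}, \qquad p_i \ge 0,\quad \sum_i p_i = 1,
\]

and entangled otherwise; the separability problem asks whether such a decomposition exists for a given $\rho_{AB}$.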

Physical portrayal of computational complexity

Annila, Arto
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 05/06/2009
Search relevance: 66.23%
Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process, from an initial instance to the final acceptance, is motivated by the observation that many natural processes have been recognized to complete in non-polynomial time (NP). The irreversible process with three or more degrees of freedom is found intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions will affect subsequently available sets of decisions. The state space of a non-deterministic finite automaton is evolving due to the computation itself, hence it cannot be efficiently contracted using a deterministic finite automaton that will arrive at a solution in super-polynomial time. The solution of the NP problem itself is verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on computational history, hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the class P set of states is inherently smaller than the set of class NP. Since the computational time to contract a given set is proportional to dissipation...

Computational Complexity in Electronic Structure

Whitfield, James D.; Love, Peter J.; Aspuru-Guzik, Alan
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 16/08/2012
Search relevance: 66.2%
In quantum chemistry, the price paid by all known efficient model chemistries is either the truncation of the Hilbert space or uncontrolled approximations. Theoretical computer science suggests that these restrictions are not mere shortcomings of the algorithm designers and programmers but could stem from the inherent difficulty of simulating quantum systems. Extensions of computer science and information processing exploiting quantum mechanics have led to new ways of understanding the ultimate limitations of computational power. Interestingly, this perspective helps us understand widely used model chemistries in a new light. In this article, the fundamentals of computational complexity will be reviewed and motivated from the vantage point of chemistry. Then recent results from the computational complexity literature regarding common model chemistries, including Hartree-Fock and density functional theory, are discussed.; Comment: 14 pages, 2 figures, 1 table. Comments welcome

Mathematical foundations of modern cryptography: computational complexity perspective

Goldwasser, Shafi
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 30/11/2002
Search relevance: 66.2%
Theoretical computer science has found fertile ground in many areas of mathematics. The approach has been to consider classical problems through the prism of computational complexity, where the number of basic computational steps taken to solve a problem is the crucial qualitative parameter. This new approach has led to a sequence of advances, in setting and solving new mathematical challenges as well as in harnessing discrete mathematics to the task of solving real-world problems. In this talk, I will survey the development of modern cryptography -- the mathematics behind secret communications and protocols -- in this light. I will describe the complexity theoretic foundations underlying the cryptographic tasks of encryption, pseudo-random number generators and functions, zero knowledge interactive proofs, and multi-party secure protocols. I will attempt to highlight the paradigms and proof techniques which unify these foundations, and which have made their way into the mainstream of complexity theory.
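
One of the listed primitives has an especially compact illustration. The sketch below (an illustrative example, not a construction from the talk itself) shows the Blum-Blum-Shub pseudo-random generator, whose security reduces to the assumed hardness of factoring the modulus:

    import math

    # Blum-Blum-Shub: iterate the squaring map x -> x^2 mod N and emit
    # least-significant bits. N = p*q with p, q = 3 (mod 4); the toy
    # modulus 77 = 7*11 is for readability, real use needs a ~2048-bit N.
    def bbs_bits(seed, n_bits, N=77):
        assert 1 < seed < N and math.gcd(seed, N) == 1
        x = (seed * seed) % N        # start from a quadratic residue
        out = []
        for _ in range(n_bits):
            x = (x * x) % N          # one squaring step
            out.append(x & 1)        # output the least-significant bit
        return out

    print(bbs_bits(3, 8))            # a short keystream from seed 3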

Statistical mechanics of classical and quantum computational complexity

Laumann, C. R.; Moessner, R.; Scardicchio, A.; Sondhi, S. L.
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 08/09/2010
Search relevance: 66.2%
The quest for quantum computers is motivated by their potential for solving problems that defy existing, classical, computers. The theory of computational complexity, one of the crown jewels of computer science, provides a rigorous framework for classifying the hardness of problems according to the computational resources, most notably time, needed to solve them. Its extension to quantum computers allows the relative power of quantum computers to be analyzed. This framework identifies families of problems which are likely hard for classical computers (``NP-complete'') and those which are likely hard for quantum computers (``QMA-complete'') by indirect methods. That is, they identify problems of comparable worst-case difficulty without directly determining the individual hardness of any given instance. Statistical mechanical methods can be used to complement this classification by directly extracting information about particular families of instances---typically those that involve optimization---by studying random ensembles of them. These pose unusual and interesting (quantum) statistical mechanical questions and the results shed light on the difficulty of problems for large classes of algorithms as well as providing a window on the contrast between typical and worst case complexity. In these lecture notes we present an introduction to this set of ideas with older work on classical satisfiability and recent work on quantum satisfiability as primary examples. We also touch on the connection of computational hardness with the physical notion of glassiness.; Comment: 25 pages...
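
The random-ensemble methodology is easy to demonstrate in miniature. The toy experiment below (the standard random 3-SAT model; brute force, hence the small n) estimates the probability of satisfiability as the clause-to-variable ratio crosses the empirically observed threshold near 4.27; at this scale finite-size effects blur the drop:

    import itertools, random

    def random_3sat(n, m, rng):
        # each clause: 3 distinct variables with independent random signs
        return [tuple(v * rng.choice((-1, 1)) for v in rng.sample(range(1, n + 1), 3))
                for _ in range(m)]

    def satisfiable(clauses, n):
        # brute force over all 2^n assignments; fine for small n
        for bits in itertools.product((False, True), repeat=n):
            if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl) for cl in clauses):
                return True
        return False

    rng = random.Random(0)
    n, trials = 12, 100
    for alpha in (3.0, 4.0, 4.3, 5.0):
        sat = sum(satisfiable(random_3sat(n, int(alpha * n), rng), n) for _ in range(trials))
        print(f"alpha={alpha}: P(sat) ~ {sat / trials:.2f}")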

On the Computational Complexity of Defining Sets

Hatami, Hamed; Maserrat, Hossein
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 31/12/2006
Search relevance: 66.23%
Suppose we have a family ${\cal F}$ of sets. For every $S \in {\cal F}$, a set $D \subseteq S$ is a {\sf defining set} for $({\cal F},S)$ if $S$ is the only element of $\cal{F}$ that contains $D$ as a subset. This concept has been studied in numerous cases, such as vertex colorings, perfect matchings, dominating sets, block designs, geodetics, orientations, and Latin squares. In this paper, first, we propose the concept of a defining set of a logical formula, and we prove that this problem is $\Sigma_2$-complete. We also show that the following problem about the defining set of vertex colorings of graphs is $\Sigma_2$-complete: {\sc Instance:} A graph $G$ with a vertex coloring $c$ and an integer $k$. {\sc Question:} Letting ${\cal C}(G)$ be the set of all $\chi(G)$-colorings of $G$, does $({\cal C}(G),c)$ have a defining set of size at most $k$? Moreover, we study the computational complexity of some other variants of this problem.
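
The condition in the first sentence is directly executable; a minimal sketch (names are illustrative):

    def is_defining(family, S, D):
        # D is a defining set for (family, S) iff S is the unique member
        # of the family containing D as a subset
        assert S in family and D <= S
        return all(T == S for T in family if D <= T)

    F = [frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})]
    print(is_defining(F, frozenset({1, 2}), frozenset({1})))     # False: {1} is also contained in {1, 3}
    print(is_defining(F, frozenset({1, 2}), frozenset({1, 2})))  # True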

Why Philosophers Should Care About Computational Complexity

Aaronson, Scott
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.17%
One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.; Comment: 58 pages, to appear in "Computability: G\"odel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added

Computational Complexity of Interactive Behaviors

Lago, Ugo Dal; Heindel, Tobias; Mazza, Damiano; Varacca, Daniele
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 04/09/2012
Search relevance: 66.17%
The theory of computational complexity focuses on functions and, hence, studies programs whose interactive behavior is reduced to a simple question/answer pattern. We propose a broader theory whose ultimate goal is expressing and analyzing the intrinsic difficulty of fully general interactive behaviors. To this end, we use standard tools from concurrency theory, including labelled transition systems (formalizing behaviors) and their asynchronous extension (providing causality information). Behaviors are implemented by means of a multiprocessor machine executing CCS-like processes. The resulting theory is shown to be consistent with the classical definitions: when we restrict to functional behaviors (i.e., question/answer patterns), we recover several standard computational complexity classes.; Comment: 17 pages
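
To make the question/answer pattern concrete, here is a toy rendering (hypothetical names; the paper itself works with CCS-like processes and asynchronous transition systems) of an ordinary function as a behavior whose whole interaction is one question followed by one answer:

    # states: "wait" --?x--> ("answering", x) --!f(x)--> "done"
    def function_as_behavior(f):
        def step(state, label):
            if state == "wait" and label.startswith("?"):
                return ("answering", int(label[1:]))
            if state[0] == "answering" and label == "!" + str(f(state[1])):
                return "done"
            raise ValueError("no such transition")
        return step

    step = function_as_behavior(lambda x: x * x)
    s = step("wait", "?7")   # the environment asks the question 7
    s = step(s, "!49")       # the behavior answers 49 and terminates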

Proceedings International Workshop on Developments in Implicit Computational complExity

Baillot, Patrick
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 04/05/2010
Search relevance: 66.28%
This volume contains the proceedings of the International Workshop on Developments in Implicit Computational complExity (DICE 2010), which took place on March 27-28 2010 in Paphos, Cyprus, as a satellite event of the Joint European Conference on Theory and Practice of Software, ETAPS 2010. Implicit Computational Complexity aims at studying computational complexity without referring to external measuring conditions or particular machine models, but instead by considering restrictions on programming languages or logical principles implying complexity properties. The aim of this workshop was to bring together researchers working on implicit computational complexity, from its logical and semantical aspects to those related to the static analysis of programs, so as to foster their interaction and to give newcomers an overview of the current trends in this area.

Computational complexity of solving polynomial differential equations over unbounded domains

Pouly, Amaury; Graça, Daniel S.
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.2%
In this paper we investigate the computational complexity of solving ordinary differential equations (ODEs) $y^{\prime}=p(y)$ over \emph{unbounded time domains}, where $p$ is a vector of polynomials. Contrary to the bounded (compact) time case, this problem has not been well-studied, apparently due to the "intuition" that it can always be reduced to the bounded case by using rescaling techniques. However, as we show in this paper, rescaling techniques do not seem to provide meaningful insights on the complexity of this problem, since the use of such techniques introduces a dependence on parameters which are hard to compute. We present algorithms which numerically solve these ODEs over unbounded time domains. These algorithms have guaranteed precision, i.e., given some arbitrarily large time $t$ and error bound $\varepsilon$ as input, they will output a value $\tilde{y}$ which satisfies $\|y(t)-\tilde{y}\|\leq\varepsilon$. We analyze the complexity of these algorithms and show that they compute $\tilde{y}$ in time polynomial in several quantities, including the time $t$, the precision of the output $\varepsilon$, and the length of the curve $y$ from $0$ to $t$, assuming it exists until time $t$. We consider both algebraic complexity and bit complexity. As far as we know...
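
In symbols, the guarantee just described reads roughly as follows (a paraphrase of the abstract; the exact parameter list and polynomial are in the paper, with the precision presumably entering through its bit size):

\[
\|y(t)-\tilde{y}\| \le \varepsilon
\quad\text{computed in time}\quad
\mathrm{poly}\Big(t,\ \log\tfrac{1}{\varepsilon},\ \mathrm{len}(t)\Big),
\qquad
\mathrm{len}(t)=\int_0^t\|y'(s)\|\,ds=\int_0^t\|p(y(s))\|\,ds,
\]

where $\mathrm{len}(t)$ is the length of the solution curve from $0$ to $t$.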

Towards Understanding the Predictability of Stock Markets from the Perspective of Computational Complexity

Aspnes, James; Fischer, David F.; Fischer, Michael J.; Kao, Ming-Yang; Kumar, Alok
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.22%
This paper initiates a study into the century-old issue of market predictability from the perspective of computational complexity. We develop a simple agent-based model for a stock market where the agents are traders equipped with simple trading strategies, and their trades together determine the stock prices. Computer simulations show that a basic case of this model is already capable of generating price graphs which are visually similar to the recent price movements of high tech stocks. In the general model, we prove that if there are a large number of traders but they employ a relatively small number of strategies, then there is a polynomial-time algorithm for predicting future price movements with high accuracy. On the other hand, if the number of strategies is large, market prediction becomes complete in two new computational complexity classes CPP and BCPP, which are between P^NP[O(log n)] and PP. These computational completeness results open up a novel possibility that the price graph of an actual stock could be sufficiently deterministic for various prediction goals but appear random to all polynomial-time prediction algorithms.; Comment: The conference version will appear in SODA 2001
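
A stripped-down model in this spirit fits in a few lines (an illustrative toy with a hypothetical three-strategy pool, not the authors' model): many traders, few strategies, and a price that moves with the sign of net demand, which is the regime the paper proves predictable in polynomial time:

    import random

    rng = random.Random(1)

    # each strategy maps the price history to an order: +1 buy, -1 sell
    strategies = [
        lambda h: 1 if h[-1] >= h[-2] else -1,   # trend follower
        lambda h: -1 if h[-1] >= h[-2] else 1,   # contrarian
        lambda h: 1 if h[-1] < 100 else -1,      # fundamentalist around value 100
    ]

    # many traders drawing from a small strategy pool
    traders = [rng.choice(strategies) for _ in range(1000)]

    prices = [100.0, 100.0]
    for _ in range(50):
        net = sum(s(prices) for s in traders)    # aggregate net demand
        prices.append(prices[-1] + 0.01 * net)   # linear price impact
    print(prices[-5:])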

On the computational complexity of a game of cops and robbers

Mamino, Marcello
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.17%
We study the computational complexity of a perfect-information two-player game proposed by Aigner and Fromme. The game takes place on an undirected graph where n simultaneously moving cops attempt to capture a single robber, all moving at the same speed. The players are allowed to pick their starting positions at the first move. The question of the computational complexity of deciding this game was raised in the '90s by Goldstein and Reingold. We prove that the game is hard for PSPACE.; Comment: 15 pages, 2 figures. Final accepted version, to be published in Theoretical Computer Science
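
For contrast with this hardness result, the one-cop case of the standard game is classically easy: by the Nowakowski-Winkler characterization, a graph is 1-cop-win exactly when it is dismantlable, i.e. reducible to a single vertex by repeatedly deleting dominated vertices. A sketch of that check (an aside, not material from the paper):

    def cop_win(adj):
        # adj: vertex -> set of neighbours of an undirected graph;
        # delete u whenever its closed neighbourhood fits inside another's
        g = {v: set(ns) for v, ns in adj.items()}
        closed = lambda v: g[v] | {v}
        removed = True
        while removed and len(g) > 1:
            removed = False
            for u in list(g):
                if any(w != u and closed(u) <= closed(w) for w in g):
                    for w in g[u]:
                        g[w].discard(u)
                    del g[u]
                    removed = True
                    break
        return len(g) == 1

    path = {0: {1}, 1: {0, 2}, 2: {1}}
    c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(cop_win(path), cop_win(c4))   # True False: the robber evades on the 4-cycle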

Computational Complexity and Phase Transitions

Istrate, Gabriel
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 31/05/2000
Search relevance: 66.17%
Phase transitions in combinatorial problems have recently been shown to be useful in locating "hard" instances of combinatorial problems. The connection between computational complexity and the existence of phase transitions has been addressed in Statistical Mechanics and Artificial Intelligence, but not studied rigorously. We take a step in this direction by investigating the existence of sharp thresholds for the class of generalized satisfiability problems defined by Schaefer. In the case when all constraints are clauses we give a complete characterization of such problems that have a sharp threshold. While NP-completeness does not imply (even in this restricted case) the existence of a sharp threshold, it "almost implies" this, since clausal generalized satisfiability problems that lack a sharp threshold are either 1. polynomial time solvable, or 2. predicted, with success probability lower bounded by some positive constant across the whole probability range, by a single, trivial procedure.; Comment: A (slightly) revised version of the paper submitted to the 15th IEEE Conference on Computational Complexity
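
In its simplest, constant-density form, the property under investigation reads: a problem has a sharp threshold at density $c^*$ when, for every $\varepsilon>0$, a random instance $\Phi_{n,m}$ with $n$ variables and $m$ constraints satisfies

\[
\Pr\big[\Phi_{n,\lfloor(1-\varepsilon)c^*n\rfloor}\ \text{satisfiable}\big]\xrightarrow[n\to\infty]{}1,
\qquad
\Pr\big[\Phi_{n,\lceil(1+\varepsilon)c^*n\rceil}\ \text{satisfiable}\big]\xrightarrow[n\to\infty]{}0;
\]

a coarse threshold instead lets the transition stretch over a window of densities of constant width.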

Computational Complexity Analysis of Simple Genetic Programming On Two Problems Modeling Isolated Program Semantics

Durrett, Greg; Neumann, Frank; O'Reilly, Una-May
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Search relevance: 66.17%
Analyzing the computational complexity of evolutionary algorithms for binary search spaces has significantly increased their theoretical understanding. With this paper, we start the computational complexity analysis of genetic programming. We set up several simplified genetic programming algorithms and analyze them on two separable model problems, ORDER and MAJORITY, each of which captures an important facet of typical genetic programming problems. Both analyses give the first rigorous insights into aspects of genetic programming design, highlighting in particular the impact of accepting or rejecting neutral moves and the importance of a local mutation operator.; Comment: 26 pages
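
To indicate the flavor of these model problems, here is a sketch of the ORDER fitness function as it is usually described (leaves carry literals $x_i$ or $\bar{x}_i$, and a variable scores when its positive literal is met first in the in-order leaf walk; a paraphrase, so details may differ from the paper):

    # trees are nested pairs; a leaf is +i for the literal x_i, -i for its
    # complement; fitness counts variables whose positive literal comes first
    def order_fitness(tree):
        def leaves(t):
            if isinstance(t, tuple):
                yield from leaves(t[0])
                yield from leaves(t[1])
            else:
                yield t
        seen, score = set(), 0
        for lit in leaves(tree):
            if abs(lit) not in seen:
                seen.add(abs(lit))
                if lit > 0:
                    score += 1
        return score

    # leaf walk: x1, ~x2, ~x1, x2 -> only x1 scores
    print(order_fitness(((1, -2), (-1, 2))))   # 1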

Proceedings Second Workshop on Developments in Implicit Computational Complexity

Marion, Jean-Yves
Source: Cornell University  Publisher: Cornell University
Type: Journal Article
Published on 01/01/2012
Search relevance: 66.28%
This volume contains the proceedings of the Second International Workshop on Developments in Implicit Computational complExity (DICE 2011), which took place on April 2-3 2011 in Saarbruecken, Germany, as a satellite event of the Joint European Conference on Theory and Practice of Software, ETAPS 2011. Implicit Computational Complexity aims at studying computational complexity without referring to external measuring conditions or particular machine models, but instead by considering restrictions on programming languages or logical principles implying complexity properties. The aim of this workshop was to bring together researchers working on implicit computational complexity, from its logical and semantic aspects to those related to the static analysis of programs, so as to foster their interaction and to give newcomers an overview of the current trends in this area. The first DICE workshop was held in 2010 at ETAPS and published in EPTCS, volume 23 (http://eptcs.org/content.cgi?DICE2010).; Comment: EPTCS 75, 2012