Page 1 of results from 2854 digital items found in 0.051 seconds

Quantum signal processing

Eldar, Yonina Chana, 1973-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 346 p.; 3228057 bytes; 3227811 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.64%
Quantum signal processing (QSP), as formulated in this thesis, borrows from the formalism and principles of quantum mechanics and some of its interesting axioms and constraints, leading to a novel paradigm for signal processing with applications in areas ranging from frame theory, quantization and sampling methods to detection, parameter estimation, covariance shaping and multiuser wireless communication systems. The QSP framework is aimed at developing new or modifying existing signal processing algorithms by drawing a parallel between quantum mechanical measurements and signal processing algorithms, and by exploiting the rich mathematical structure of quantum mechanics, but not requiring a physical implementation based on quantum mechanics. This framework provides a unifying conceptual structure for a variety of traditional processing techniques, and a precise mathematical setting for developing generalizations and extensions of algorithms. Emulating the probabilistic nature of quantum mechanics in the QSP framework gives rise to probabilistic and randomized algorithms. As an example we introduce a probabilistic quantizer and derive its statistical properties. Exploiting the concept of generalized quantum measurements we develop frame-theoretical analogues of various quantum-mechanical concepts and results...
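The probabilistic quantizer mentioned above can be pictured with a simple randomized-rounding scheme. The sketch below shows the general idea (an unbiased randomized uniform quantizer), not the specific construction derived in the thesis; the step size and test value are invented for illustration.

```python
import numpy as np

def probabilistic_quantizer(x, delta, rng=None):
    """Randomized uniform quantizer with step size delta.

    Each sample is rounded up or down to a neighboring quantization level
    with probabilities chosen so that E[q(x)] = x, i.e. the quantization
    error is zero-mean for every input. Illustrative sketch only.
    """
    rng = np.random.default_rng(rng)
    lower = np.floor(x / delta) * delta          # nearest level below x
    p_up = (x - lower) / delta                   # probability of rounding up
    return lower + delta * (rng.random(np.shape(x)) < p_up)

# Averaging many independent quantizations of the same value recovers it:
q = probabilistic_quantizer(np.full(100_000, 0.37), delta=0.25, rng=0)
print(q.mean())   # close to 0.37
```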

Ultralow-noise modelocked lasers

Jiang, Leaf Alden, 1976-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 357 p.; 6679356 bytes; 6678883 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.62%
The measurement, design, and theory of ultralow-noise actively modelocked lasers are presented. We demonstrate quantum-limited noise performance of a hybridly modelocked semiconductor laser with an rms timing jitter of only 47 fs (10 Hz to 10 MHz) and 86 fs (10 Hz to 4.5 GHz). The daunting task of measuring ultralow-noise levels is solved by a combined use of microwave and optical measurement techniques that yield complete characterization of the laser noise from DC to half the laser repetition rate. Optical cross-correlation techniques are shown to be a useful tool for quantifying fast noise processes, isolating the timing jitter noise component, measuring timing jitter asymmetries, and measuring correlations of pulses in harmonically modelocked lasers. A noise model for harmonically modelocked lasers is presented that illustrates how to correctly interpret the amplitude noise and timing jitter from microwave measurements. Using information about the supermodes, the amplitude and timing noise can be quantified independently, thereby making it possible to measure the noise of harmonically modelocked lasers with multi-gigahertz repetition rates. Methods to further reduce the noise of a modelocked laser are explored. We demonstrate that photon seeding is effective at reducing the noise of a modelocked semiconductor laser without increasing the pulse width. Experimental demonstrations of a timing jitter eater...
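For context on how an rms jitter figure over a stated bandwidth (such as the 47 fs from 10 Hz to 10 MHz quoted above) is typically extracted, the standard von der Linde relation integrates the single-sideband phase noise L_n(f) of the n-th repetition-rate harmonic. This is a hedged reminder of the common relation, not a formula quoted from the thesis:

```latex
\sigma_t \;=\; \frac{1}{2\pi\, n\, f_{\mathrm{rep}}}\,
\sqrt{\,2\int_{f_L}^{f_H} L_n(f)\,\mathrm{d}f\,},
\qquad \text{e.g. } f_L = 10~\mathrm{Hz},\; f_H = 10~\mathrm{MHz}.
```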

Terahertz quantum cascade lasers; Terahertz QCLs

Williams, Benjamin S. (Benjamin Stanford), 1974-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 310 p.; 3960932 bytes; 4972623 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.62%
The development of the terahertz frequency range has long been impeded by the relative dearth of compact, coherent radiation sources of reasonable power. This thesis details the development of quantum cascade lasers (QCLs) that operate in the terahertz with photon energies below the semiconductor Reststrahlen band. Photons are emitted via electronic intersubband transitions that take place entirely within the conduction band, where the wavelength is chosen by engineering the well and barrier widths in multiple-quantum-well heterostructures. Fabrication of such long wavelength lasers has traditionally been challenging, since it is difficult to obtain a population inversion between such closely spaced energy levels, and because traditional dielectric waveguides become extremely lossy due to free carrier absorption. This thesis reports the development of terahertz QCLs in which the lower radiative state is depopulated via resonant longitudinal-optical phonon scattering. This mechanism is efficient and temperature insensitive, and provides protection from thermal backfilling due to the large energy separation between the lower radiative state and the injector. Both properties are important in allowing higher temperature operation at longer wavelengths. Lasers using a surface plasmon based waveguide grown on a semi-insulating (SI) GaAs substrate were demonstrated at 3.4 THz in pulsed mode up to 87 K...
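As a quick sanity check on "photon energies below the semiconductor Reststrahlen band" (my arithmetic, not taken from the thesis): with h ≈ 4.14 meV per THz,

```latex
E = h\nu \;\approx\; 4.14~\tfrac{\mathrm{meV}}{\mathrm{THz}} \times 3.4~\mathrm{THz}
\;\approx\; 14~\mathrm{meV},
```

which sits well below the GaAs Reststrahlen band (roughly 33-36 meV, between the TO and LO phonon energies).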

Optimal dynamic routing in an unreliable queuing system

Tsitsiklis, John N
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 141 [i.e. 140] leaves; 6967694 bytes; 6967457 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.62%
by John Nikolaos Tsitsiklis.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Bibliography: leaves 120-124.

Iterative algorithms for optimal signal reconstruction and parameter identification given noisy and incomplete data

Musicus, Bruce R
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 2 v. (466 leaves); 35604417 bytes; 35604170 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.62%
by Bruce R. Musicus.; Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; Vita.; Includes bibliographical references.

Stochastic processes on graphs with cycles : geometric and variational approaches

Wainwright, Martin J. (Martin James), 1973-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 271 leaves; 21824527 bytes; 21824286 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.67%
Stochastic processes defined on graphs arise in a tremendous variety of fields, including statistical physics, signal processing, computer vision, artificial intelligence, and information theory. The formalism of graphical models provides a useful language with which to formulate fundamental problems common to all of these fields, including estimation, model fitting, and sampling. For graphs without cycles, known as trees, all of these problems are relatively well-understood, and can be solved efficiently with algorithms whose complexity scales in a tractable manner with problem size. In contrast, these same problems present considerable challenges in general graphs with cycles. The focus of this thesis is the development and analysis of methods, both exact and approximate, for problems on graphs with cycles. Our contributions are in developing and analyzing techniques for estimation, as well as methods for computing upper and lower bounds on quantities of interest (e.g., marginal probabilities; partition functions). In order to do so, we make use of exponential representations of distributions, as well as insight from the associated information geometry and Legendre duality. Our results demonstrate the power of exponential representations for graphical models...
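The exponential representations referred to here take the standard exponential-family form; as a brief reminder (standard material, not a result specific to the thesis):

```latex
p(x;\theta) \;=\; \exp\Big(\textstyle\sum_{\alpha}\theta_\alpha\,\phi_\alpha(x) \;-\; A(\theta)\Big),
\qquad
A(\theta) \;=\; \log \sum_{x}\exp\Big(\textstyle\sum_{\alpha}\theta_\alpha\,\phi_\alpha(x)\Big),
```

where the φ_α are potential functions on the cliques of the graph and A(θ) is the log partition function. Since marginal moments are derivatives of A, i.e. E_θ[φ_α(x)] = ∂A/∂θ_α, bounds on A(θ) translate directly into bounds on marginal probabilities.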

Energy allocation and transmission scheduling for wireless and space communications

Fu, Alvin C
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 164 p.; 4820030 bytes; 4819839 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.64%
This thesis presents two innovations to geophysical inversion. The first provides a framework and an algorithm for combining linear deconvolution methods with geostatistical interpolation techniques. This allows for sparsely sampled data to aid in image deblurring problems, or, conversely, noisy and blurred data to aid in sample interpolation. In order to overcome difficulties arising from high dimensionality, the solution must be derived in the correct framework and the structure of the problem must be exploited by an iterative solution algorithm. The effectiveness of the method is demonstrated first on a synthetic problem involving satellite remotely sensed data, and then on a real 3-D seismic data set combined with well logs. The second innovation addresses how to use wavelets in a linear geophysical inverse problem. Wavelets have led to great successes in image compression and denoising, so it is interesting to see what, if anything, they can do for a general linear inverse problem. It is shown that a simple nonlinear operation of weighting and thresholding wavelet coefficients can consistently outperform classical linear inverse methods in terms of mean-square error across a broad range of noise magnitude in the data. Wavelets allow for an adaptively smoothed solution: smoothed more in uninteresting regions...
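The "weighting and thresholding of wavelet coefficients" can be pictured with a minimal single-level Haar example. This is an illustrative sketch of soft thresholding with an invented noise level and threshold, not the estimator developed in the thesis:

```python
import numpy as np

def haar_decompose(x):
    # single-level orthonormal Haar wavelet transform (x has even length)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(w, lam):
    # shrink coefficients toward zero; small (noise-dominated) ones vanish
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# denoise a noisy ramp by shrinking the detail coefficients
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 256)
noisy = clean + 0.05 * rng.standard_normal(256)
a, d = haar_decompose(noisy)
d_shrunk = soft_threshold(d, lam=0.05 * np.sqrt(2 * np.log(noisy.size)))
denoised = haar_reconstruct(a, d_shrunk)
```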

Approaches to multi-agent learning

Chang, Yu-Han, Ph. D., Massachusetts Institute of Technology
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 171 leaves; 9090627 bytes; 9097798 bytes; application/pdf; application/pdf
ENG
Search Relevance
65.62%
Systems involving multiple autonomous entities are becoming increasingly prominent. Sensor networks, teams of robotic vehicles, and software agents are just a few examples. In order to design these systems, we need methods that allow our agents to autonomously learn and adapt to the changing environments they find themselves in. This thesis explores ideas from game theory, online prediction, and reinforcement learning, tying them together to work on problems in multi-agent learning. We begin with the most basic framework for studying multi-agent learning: repeated matrix games. We quickly realize that there is no such thing as an opponent-independent, globally optimal learning algorithm. Some form of opponent assumption is necessary when designing multi-agent learning algorithms. We first show that we can exploit opponents that satisfy certain assumptions, and in a later chapter, we show how we can avoid being exploited ourselves. From this beginning, we branch out to study more complex sequential decision making problems in multi-agent systems, or stochastic games. We study environments in which there are large numbers of agents, and where environmental state may only be partially observable.; (cont.) In fully cooperative situations...
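As a toy picture of "exploiting opponents that satisfy certain assumptions" in a repeated matrix game, the sketch below best-responds to the empirical play of a stationary opponent (fictitious-play style). The payoff numbers and the opponent model are invented for illustration and are not taken from the thesis:

```python
import numpy as np

# Our payoff matrix for a repeated 2x2 matrix game (rows = our actions,
# columns = opponent actions). Numbers invented for illustration.
payoff = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

rng = np.random.default_rng(0)

def opponent_action():
    # a stationary (and therefore exploitable) opponent: action 0 w.p. 0.7
    return 0 if rng.random() < 0.7 else 1

counts = np.ones(2)        # smoothed empirical counts of opponent actions
total = 0.0
rounds = 10_000
for _ in range(rounds):
    belief = counts / counts.sum()                 # empirical mixed strategy
    my_action = int(np.argmax(payoff @ belief))    # best response to belief
    opp = opponent_action()
    total += payoff[my_action, opp]
    counts[opp] += 1

print("average payoff against the stationary opponent:", total / rounds)
```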

Robust stability and contraction analysis of nonlinear systems via semidefinite optimization

Aylward, Erin M
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 110 p.
ENG
Search Relevance
65.62%
A wide variety of stability and performance problems for linear and certain classes of nonlinear dynamical systems can be formulated as convex optimization problems involving linear matrix inequalities (LMIs). These formulations can be solved numerically with computationally efficient interior-point methods. Many of the first LMI-based stability formulations applied to linear systems and to the class of nonlinear systems representable as an interconnection of a linear system with bounded uncertainty blocks. Recently, stability and performance analyses of more general nonlinear deterministic systems, namely those with polynomial or rational dynamics, have been converted into an LMI framework using sum of squares (SOS) programming. SOS programming combines elements of computational algebra and convex optimization to provide efficient convex relaxations for various computationally-hard problems. In this thesis we extend the class of systems that can be analyzed with LMI-based methods.; (cont.) We show how to analyze the robust stability properties of uncertain nonlinear systems with polynomial or rational dynamics, as well as a class of systems with external inputs, via contraction analysis and SOS programming. Specifically, we show how contraction analysis...
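For reference, the classical Lyapunov LMI and the contraction condition alluded to above take roughly the following forms (standard statements, stated with hedging; the thesis's precise SOS relaxations may differ):

```latex
\dot{x} = Ax \ \text{stable} \;\Longleftrightarrow\; \exists\, P \succ 0:\;
A^{\mathsf T}P + PA \prec 0,
\qquad\qquad
\dot{M} + \Big(\tfrac{\partial f}{\partial x}\Big)^{\mathsf T} M
+ M\,\tfrac{\partial f}{\partial x} \;\preceq\; -2\lambda M,\ \ M(x,t)\succ 0,
```

where the second condition certifies contraction of a nonlinear system \(\dot{x} = f(x,t)\) in the metric M. When f and the entries of M are polynomial, these matrix inequalities can be relaxed to sum-of-squares constraints and handed to a semidefinite solver.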

Pearls of Wisdom : technology for intentional reflection and learning in constructionist cooperatives; Technology for intentional reflection and learning in constructionist cooperatives

Chapman, Robbin Nicole, 1958-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 312 p.
ENG
Search Relevance
65.68%
At the core of the constructionist learning paradigm is the idea that people learn through design experiences. However, in most settings, learners rarely revisit their work to reflect on design and learning processes. The practice of reflection is not integrated into regular community practice. That omission results in lost opportunities for deeper learning, because reflection plays an important role in knowledge integration. In order to leverage the benefits of constructionist learning, learners must go beyond the activities of construction and reflect on their learning. This involves examining and gaining a deeper understanding of the how and why of their design process, including learning strategies. The conceptual framework of this dissertation, Cooperative Constructionism, establishes a design approach to reflection with a set of tools and methods that support reflection on learning. A Constructionist Cooperative is a community of learners where articulating and sharing learning experiences is a regular practice. A goal of this dissertation is to explore the computational tools and practices that promote and support such activities. Using these tools, learners construct intentional-reflective artifacts, which embody their reflection on their design and learning experiences.; (cont.) Two learning scaffolds were developed to promote the emergence of a Constructionist Cooperative. The first is a computational scaffold...

Risk and robust optimization

Brown, David Benjamin, Ph. D. Massachusetts Institute of Technology
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 213 p.
ENG
Search Relevance
65.62%
This thesis develops and explores the connections between risk theory and robust optimization. Specifically, we show that there is a one-to-one correspondence between a class of risk measures known as coherent risk measures and uncertainty sets in robust optimization. An important consequence of this is that one may construct uncertainty sets, which are the critical primitives of robust optimization, using decision-maker risk preferences. In addition, we show some results on the geometry of such uncertainty sets. We also consider a more general class of risk measures known as convex risk measures, and show that these risk measures lead to a more flexible approach to robust optimization. In particular, these models allow one to specify not only the values of the uncertain parameters for which feasibility should be ensured, but also the degree of feasibility. We show that traditional robust optimization models are a special case of this framework. As a result, this framework implies a family of probability guarantees on infeasibility at different levels, as opposed to standard robust approaches, which generally imply a single guarantee.; (cont.) Furthermore, we illustrate the performance of these risk measures on a real-world portfolio optimization application and show promising results that our methodology can...
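The correspondence rests on the standard representation of coherent risk measures (Artzner et al.); a hedged sketch of the shape of the result, with the exact uncertainty-set construction left to the thesis:

```latex
\rho(X) \;=\; \sup_{q \in \mathcal{Q}} \mathbb{E}_{q}[-X]
\qquad \text{for some convex set } \mathcal{Q} \text{ of probability measures.}
```

Taking X = b - \tilde{a}^{\mathsf T}x (the constraint slack), the risk constraint \rho(b - \tilde{a}^{\mathsf T}x) \le 0 is equivalent to a^{\mathsf T}x \le b for every a in the uncertainty set \{\mathbb{E}_q[\tilde{a}] : q \in \mathcal{Q}\}, which is the sense in which risk preferences generate uncertainty sets.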

Generalized Volterra-Wiener and surrogate data methods for complex time series analysis

Shashidhar, Akhil
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 150 leaves
ENG
Search Relevance
75.66%
This thesis describes the current state of the art in nonlinear time series analysis, bringing together approaches from a broad range of disciplines including nonlinear dynamical systems, nonlinear modeling theory, time-series hypothesis testing, information theory, and self-similarity. We stress mathematical and qualitative relationships between key algorithms in the respective disciplines, in addition to describing new robust approaches to solving classically intractable problems. Part I presents a comprehensive review of various classical approaches to time series analysis from both deterministic and stochastic points of view. We focus on using these classical methods for quantification of complexity, in addition to proposing a unified approach to complexity quantification encapsulating several previous approaches. Part II presents robust modern tools for time series analysis, including surrogate data and Volterra-Wiener modeling. We describe new algorithms that bring the two approaches together and provide both a sensitive test for nonlinear dynamics and a noise-robust metric for chaos intensity.; by Akhil Shashidhar.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.; Includes bibliographical references (leaves 133-150).
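The surrogate-data side of this program is commonly implemented by Fourier phase randomization, which preserves the linear (autocorrelation) structure of a series while destroying any nonlinear determinism. A minimal sketch of that standard method, not the thesis's specific algorithms:

```python
import numpy as np

def phase_randomized_surrogate(x, rng=None):
    """Fourier-transform surrogate: keep the power spectrum, randomize phases.

    The surrogate serves as a realization of the linear-stochastic null
    hypothesis against which a nonlinearity statistic can be tested.
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.size)
    phases[0] = 0.0                      # keep the mean (DC component) real
    if n % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=n)

# Example: a surrogate of a noisy sine has the same spectrum as the original.
t = np.linspace(0, 10, 1000)
series = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).standard_normal(1000)
surrogate = phase_randomized_surrogate(series, rng=2)
```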

Learning coupled conditional random field for image decomposition : theory and application in object categorization

Ma, Xiaoxu
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 180 p.
ENG
Search Relevance
75.58%
The goal of this thesis is to build a computational system that is able to identify object categories within images. To this end, this thesis proposes a computational model of "recognition-through-decomposition-and-fusion" based on the psychophysical theories of information dissociation and integration in human visual perception. At the lowest level, contour and texture processes are defined and measured. In the mid-level, a novel coupled Conditional Random Field model is proposed to model and decompose the contour and texture processes in natural images. Various matching schemes are introduced to match the decomposed contour and texture channels in a dissociative manner. As a counterpart to the integrative process in the human visual system, adaptive combination is applied to fuse the perception in the decomposed contour and texture channels. The proposed coupled Conditional Random Field model is shown to be an important extension of popular single-layer Random Field models for modeling image processes, by dedicating a separate layer of random field grid to each individual image process and capturing the distinct properties of multiple visual processes. The decomposition enables the system to leverage each decomposed visual stimulus to its full potential in discriminating different object classes. Adaptive combination of multiple visual cues mirrors the fact that different visual cues play different roles in distinguishing various object classes. Experimental results demonstrate that the proposed computational model of "recognition-through-decomposition-and-fusion" achieves better performance than most of the state-of-the-art methods in recognizing the objects in Caltech-101...
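For orientation, a single-layer conditional random field over labels y given an image x has the usual Gibbs form below; the coupled model described above can be read, schematically and as my paraphrase rather than the thesis's notation, as dedicating one such layer to the contour process and one to the texture process, linked by inter-layer potentials:

```latex
p(y \mid x) \;=\; \frac{1}{Z(x)}\,
\exp\Big(\sum_{i} \phi(y_i, x) \;+\; \sum_{(i,j)\in\mathcal{E}} \psi(y_i, y_j, x)\Big),
```

where Z(x) normalizes over all labelings and \mathcal{E} is the edge set of the random field grid.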

Research in Computer Science and Computer Engineering

Feldman, J. A. ; Merriam, C. W.
Source: University of Rochester. Computer Science Department. Publisher: University of Rochester. Computer Science Department.
Type: Report
ENG
Search Relevance
85.68%
This report describes many of the computer related research efforts at the University of Rochester. The Department of Computer Science is involved in research in automatic programming, including very high level languages and data structures; machine perception; and in problem solving using combinations of traditional heuristic methods, artificial intelligence, and utility theory. The research of the Department of Electrical Engineering includes basic computer engineering research in the construction of computer systems and operating systems, research in image processing and in numerical methods, and research in production automation which is concerned with mechanical manufacturing and assembly, and is currently developing mathematical models of parts, raw materials and tools. In conjunction with other departments, Electrical Engineering is also using computers for biomedical applications including ultrasound diagnostic techniques for heart disease, and pattern recognition techniques for detection of cancer from PAP smears.

Research in Computer Science and Computer Engineering

Feldman, J. A. ; Merriam, C. W.
Source: University of Rochester. Computer Science Department. Publisher: University of Rochester. Computer Science Department.
Type: Report
ENG
Search Relevance
85.68%
This report describes many of the computer related research efforts at the University of Rochester. The Department of Computer Science is involved in research in automatic programming, including very high level languages and data structures; machine perception; and in problem solving using combinations of traditional heuristic methods, artificial intelligence, and utility theory. The research of the Department of Electrical Engineering includes basic computer engineering research in the construction of computer systems and operating systems, research in image processing and in numerical methods, and research in production automation which is concerned with mechanical manufacturing and assembly, and is currently developing mathematical models of parts, raw materials and tools. In conjunction with other departments, Electrical Engineering is also using computers for biomedical applications including ultrasound diagnostic techniques for heart disease, and pattern recognition techniques for detection of cancer from PAP smears.

Research in Computer Science and Computer Engineering

Feldman, J. A. ; Merriam, C. W.
Source: University of Rochester. Computer Science Department. Publisher: University of Rochester. Computer Science Department.
Type: Report
ENG
Search Relevance
65.66%
This report describes many of the computer related research efforts at the University of Rochester. The Department of Computer Science is involved in research in automatic programming, including very high level languages and data structures; machine perception; and in problem solving using combinations of traditional heuristic methods, artificial intelligence, and utility theory. The research of the Department of Electrical Engineering includes basic computer engineering research in the construction of computer systems and operating systems, research in image processing and in numerical methods, and research in production automation which is concerned with mechanical manufacturing and assembly, and is currently developing mathematical models of parts, raw materials and tools. In conjunction with other departments, Electrical Engineering is also using computers for biomedical applications including ultrasound diagnostic techniques for heart disease, and pattern recognition techniques for detection of cancer from PAP smears.

Research in Computer Science and Computer Engineering

Feldman, J. A. ; Merriam, C. W.
Source: University of Rochester. Computer Science Department. Publisher: University of Rochester. Computer Science Department.
Type: Report
ENG
Search Relevance
65.66%
This report describes many of the computer related research efforts at the University of Rochester. The Department of Computer Science is involved in research in automatic programming, including very high level languages and data structures; machine perception; and in problem solving using combinations of traditional heuristic methods, artificial intelligence, and utility theory. The research of the Department of Electrical Engineering includes basic computer engineering research in the construction of computer systems and operating systems, research in image processing and in numerical methods, and research in production automation which is concerned with mechanical manufacturing and assembly, and is currently developing mathematical models of parts, raw materials and tools. In conjunction with other departments, Electrical Engineering is also using computers for biomedical applications including ultrasound diagnostic techniques for heart disease, and pattern recognition techniques for detection of cancer from PAP smears.

Research in Computer Science and Computer Engineering

Feldman, J. A. ; Merriam, C. W.
Source: University of Rochester. Computer Science Department. Publisher: University of Rochester. Computer Science Department.
Type: Report
ENG
Search Relevance
65.66%
This report describes many of the computer related research efforts at the University of Rochester. The Department of Computer Science is involved in research in automatic programming, including very high level languages and data structures; machine perception; and in problem solving using combinations of traditional heuristic methods, artificial intelligence, and utility theory. The research of the Department of Electrical Engineering includes basic computer engineering research in the construction of computer systems and operating systems, research in image processing and in numerical methods, and research in production automation which is concerned with mechanical manufacturing and assembly, and is currently developing mathematical models of parts, raw materials and tools. In conjunction with other departments, Electrical Engineering is also using computers for biomedical applications including ultrasound diagnostic techniques for heart disease, and pattern recognition techniques for detection of cancer from PAP smears.

A theory and toolkit for the mathematics of privacy : methods for anonymizing data while minimizing information loss

Katirai, Hooman
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 86 leaves; 14904672 bytes; 14904307 bytes; application/pdf; application/pdf
ENG
Search Relevance
75.62%
Privacy laws are an important facet of our society. But they can also serve as formidable barriers to medical research. The same laws that prevent casual disclosure of medical data have also made it difficult for researchers to access the information they need to conduct research into the causes of disease. But it is possible to overcome some of these legal barriers through technology. The US law known as HIPAA, for example, allows medical records to be released to researchers without patient consent if the records are provably anonymized prior to their disclosure. It is not enough for records to be seemingly anonymous. For example, one researcher estimates that 87.1% of the US population can be uniquely identified by the combination of their zip, gender, and date of birth - fields that most people would consider anonymous. One promising technique for provably anonymizing records is called k-anonymity. It modifies each record so that it matches k other individuals in a population - where k is an arbitrary parameter. This is achieved by, for example, changing specific information such as a date of birth, to a less specific counterpart such as a year of birth.; (cont.) Previous studies have shown that achieving k-anonymity while minimizing information loss is an NP-hard problem; thus a brute force search is out of the question for most real world data sets. In this thesis...
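The generalization step described above (e.g. date of birth to year of birth) and the k-anonymity check itself are easy to sketch. The following is a toy Python example with invented records and a hypothetical generalization rule, not the information-loss-minimizing procedure developed in the thesis:

```python
from collections import Counter

# Toy records: (zip, gender, date_of_birth). Invented data for illustration.
records = [
    ("02139", "F", "1980-03-14"),
    ("02139", "F", "1980-07-02"),
    ("02139", "M", "1980-01-23"),
    ("02139", "M", "1980-11-30"),
]

def generalize(record):
    zip_code, gender, dob = record
    # hypothetical rule: truncate zip to 3 digits, keep only year of birth
    return (zip_code[:3] + "**", gender, dob[:4])

def is_k_anonymous(rows, k):
    # every quasi-identifier combination must occur at least k times
    counts = Counter(rows)
    return all(c >= k for c in counts.values())

generalized = [generalize(r) for r in records]
print(is_k_anonymous(generalized, k=2))   # True for this toy data
```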

The Baire partial quasi-metric space: A mathematical tool for asymptotic complexity analysis in Computer Science

Cerdà-Uguet, M. A.; Schellekens, M. P.; Valero, O.
Source: Cornell University Publisher: Cornell University
Type: Scientific Journal Article
Published on 30/09/2010
Search Relevance
65.64%
In 1994, S.G. Matthews introduced the notion of partial metric space in order to obtain a suitable mathematical tool for program verification [Ann. New York Acad. Sci. 728 (1994), 183-197]. He gave an application of this new structure to parallel computing by means of a partial metric version of the celebrated Banach fixed point theorem [Theoret. Comput. Sci. 151 (1995), 195-205]. Later on, M.P. Schellekens introduced the theory of complexity (quasi-metric) spaces as part of the development of a topological foundation for the asymptotic complexity analysis of programs and algorithms [Electronic Notes in Theoret. Comput. Sci. 1 (1995), 211-232]. The applicability of this theory to the asymptotic complexity analysis of Divide and Conquer algorithms was also illustrated by Schellekens. In particular, he gave a new proof, based on the use of the aforementioned Banach fixed point theorem, of the well-known fact that the Mergesort algorithm has optimal asymptotic average running time. In this paper, motivated by the utility of partial metrics in Computer Science, we discuss whether the Matthews fixed point theorem is a suitable tool to analyze the asymptotic complexity of algorithms in the spirit of Schellekens. Specifically, we show that a slight modification of the well-known Baire partial metric on the set of all words over an alphabet constitutes an appropriate tool to carry out the asymptotic complexity analysis of algorithms via fixed point methods, without the need to assume the convergence condition inherent to the definition of the complexity space in the Schellekens framework. Finally...
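For readers unfamiliar with the object in the title, the Baire partial metric on the set of finite and infinite words over an alphabet is usually written as below (the standard definition as commonly stated; the paper's "slight modification" is its own contribution and is not reproduced here):

```latex
p_B(x, y) \;=\; 2^{-\ell(x,y)},
```

where \ell(x, y) is the length of the longest common prefix of x and y (with 2^{-\infty} = 0). In particular the self-distance p_B(x, x) = 2^{-|x|} is nonzero for a finite word x, which is exactly the feature distinguishing a partial metric from an ordinary metric.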