by Sriram K. Atreya.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING; Bibliography: leaves 101-103.
by Cheryl Ann Wheeler.; Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING; Includes bibliographical references.
The Bluetooth specification describes a robust and powerful technology for short-range wireless communication. Unfortunately, the specification is immense and complicated, presenting a formidable challenge for novice developers. Currently, there is a lack of satisfactory technical documentation describing Bluetooth application development and the parts of the Bluetooth specification that are relevant to software developers. This thesis explains Bluetooth programming in the context of Internet programming and shows how most concepts in Internet programming are easily translated to Bluetooth. It describes how these concepts can be implemented in the GNU/Linux operating system using the BlueZ Bluetooth protocol stack and libraries. A Python extension module is presented that was created to assist in rapid development and deployment of Bluetooth applications. Finally, an inexpensive and trivially deployed infrastructure for location aware computing is presented, with a series of experiments conducted to determine how best to exploit such an infrastructure.; by Albert Shuyu Huang.; Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.; Includes bibliographical references (p. 87-89).
This dissertation presents new constructions and security definitions related to three areas: authentication, cascadable and commutative cryptography, and private set operations. Existing works relevant to each of these areas fall into one of two categories: efficient solutions lacking formal proofs of security or provably-secure, but highly inefficient solutions. This work will bridge this gap by presenting new constructions and definitions that are both practical and provably-secure. The first contribution in the area of efficient authentication is a provably-secure authentication protocol named HB+. The HB+ protocol is efficient enough to be implemented on extremely low-cost devices, or even by a patient human with a coin to flip. The security of HB+ is based on the hardness of a long-standing learning problem that is closely related to coding theory. HB+ is the first authentication protocol that is both practical for low-cost devices, like radio frequency identification (RFID) tags, and provably secure against active adversaries. The second contribution of this work is a new framework for defining and proving the security of cascadable cryptosystems, specifically commutative cryptosystems.; (cont.) This new framework addresses a gap in existing security definitions that fail to handle cryptosystems where ciphertexts produced by cascadable encryption and decryption operations may contain some message-independent history. Several cryptosystems...
Javari is an extension of Java that supports reference immutability constraints. Programmers write Javari type qualifiers, such as the readonly type qualifier, in their programs, and the Javari typechecker detects mutation errors (incorrect side effects) or verifies their absence. While case studies have demonstrated the practicality and value of Javari, a barrier to usability remains in the development process. A Javari program will not typecheck unless all the references in the APIs of libraries it uses are annotated with Javari type qualifiers. Manually converting existing Java libraries to Javari is both tedious and error-prone; the development process requires an automated solution. This thesis presents an algorithm for statically inferring reference immutability in Javari. The flow-insensitive and context-sensitive algorithm is sound and produces a set of qualifiers that typecheck in Javari. The algorithm is precise in that it infers the most readonly qualifiers possible; adding any additional readonly qualifiers will cause the program to not typecheck. A tool, Javarifier, implements this algorithm in order to infer the Javari type qualifiers over a set of class files. Javarifier can also insert these qualifiers into the corresponding source code...
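The backward propagation of mutability described above can be sketched in a few lines. This is a toy illustration of the inference idea, not Javarifier's actual algorithm: every reference starts as readonly, directly mutated references become mutable, and mutability propagates backward along assignment edges until a fixed point, yielding the most-readonly typing.

```python
# Toy, flow-insensitive sketch of reference-immutability inference:
# a reference is mutable if it is mutated directly, or if it flows
# (via assignment) into a mutable reference.

def infer_readonly(refs, mutated, assignments):
    """refs: reference names; mutated: names mutated directly;
    assignments: (src, dst) pairs recording `dst = src`."""
    mutable = set(mutated)
    changed = True
    while changed:
        changed = False
        for src, dst in assignments:
            # a mutable alias forces its source to be mutable as well
            if dst in mutable and src not in mutable:
                mutable.add(src)
                changed = True
    return {r: ("mutable" if r in mutable else "readonly") for r in refs}

quals = infer_readonly(
    refs=["a", "b", "c"],
    mutated=["b"],             # e.g. b.setX(1)
    assignments=[("a", "b")],  # b = a, so a must be mutable too
)
print(quals)  # c stays readonly
```

Everything not forced to be mutable stays readonly, which mirrors the "most readonly qualifiers possible" precision claim.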
In this thesis, we designed and implemented a simulator that supports multiple robots within a dynamic environment. The goal of this tool is to provide a testing environment for navigational robots that run on the MOOS platform. The simulator is written in C++ and utilizes several open source libraries to create a virtual world for robots to interact with by faking sensor information. A design goal of this thesis has been to make the simulator versatile enough to be useful for a variety of robots, from land to marine. Such a tool is valuable in research because the cost of developing a custom simulator can consume too many man-hours. Reducing this cost by creating a generic and customizable simulator has been the main motivation behind this thesis. It has also been one of the major challenges behind the project.; by Greg H. Belote.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.; Includes bibliographical references (p. 39).
The increasing complexity of microsystem design mandates a distributed and collaborative design environment. The high integration levels call for tools and generators that allow exploration of the design space irrespective of the geographical or physical availability of the design tools. The World Wide Web serves as a desirable platform for distributed access to libraries, models and design tools. The rapid growth and acceptance of the World Wide Web has happened over the same time period in which distributed object systems have stabilized and matured. The Web can become an important platform for VLSI CAD, when the distributed object technologies (e.g, CORBA) are combined with the Web technologies (e.g., HTTP, CGI) and Web-aware object oriented languages (e.g., Java). In this thesis, a framework using the Object-Web technologies is presented, which enables distributed Web based CAD. The Object-Web architecture provides an open, interoperable and scalable distributed computing environment for microsystem design, in which Web based design tools can efficiently utilize the capabilities of existing design tools on the Web to build hierarchical Web tools. The framework includes the infrastructure to store and manipulate design objects, protocols for tool communication and WebTop...
Java 1.5 will include a type system (called JSR-14) that supports parametric polymorphism, or generic classes. This will bring many benefits to Java programmers, not least because current Java practice makes heavy use of logically-generic classes, including container classes. Translation of Java source code into semantically equivalent JSR-14 source code requires two steps: parameterisation (adding type parameters to class definitions) and instantiation (adding the type arguments at each use of a parameterised class). Parameterisation need be done only once for a class, whereas instantiation must be performed for each client, of which there are potentially many more. Therefore, this work focuses on the instantiation problem. We present a technique to determine sound and precise JSR-14 types at each use of a class for which a generic type specification is available. Our approach uses a precise and context-sensitive pointer analysis to determine possible types at allocation sites, and a set-constraint-based analysis (that incorporates guarded, or conditional, constraints) to choose consistent types for both allocation and declaration sites. The technique safely handles all features of the JSR-14 type system, notably the raw types (which provide backward compatibility) and 'unchecked' operations on them. We have implemented our analysis in a tool that automatically inserts type arguments into Java code...
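A tiny sketch of the instantiation idea: gather the element types observed at the allocation sites that may reach a declaration, then pick their least upper bound in the class hierarchy as the type argument, falling back to Object when nothing better is shared. The hierarchy below is a made-up miniature, and this is an illustration of the general approach, not the thesis's constraint-based analysis.

```python
# Hypothetical mini-hierarchy: child -> parent.
HIERARCHY = {
    "Integer": "Number", "Double": "Number",
    "Number": "Object", "String": "Object",
}

def supers(t):
    """t together with all of its supertypes, bottom-up."""
    chain = [t]
    while t in HIERARCHY:
        t = HIERARCHY[t]
        chain.append(t)
    return chain

def least_upper_bound(types):
    """Deepest supertype shared by all observed element types."""
    common = set(supers(types[0]))
    for t in types[1:]:
        common &= set(supers(t))
    # the deepest common supertype has the longest chain above it
    return max(common, key=lambda t: len(supers(t)), default="Object")

# e.g. a List whose allocation sites only ever hold Integer and Double
print(least_upper_bound(["Integer", "Double"]))  # Number
print(least_upper_bound(["Integer", "String"]))  # Object
```

The real analysis must additionally be context-sensitive and handle raw types, but the core question at each declaration site is this least-upper-bound choice.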
We analyze software reuse from the perspective of information theory and
Kolmogorov complexity, assessing our ability to ``compress'' programs by
expressing them in terms of software components reused from libraries. A common
theme in the software reuse literature is that if we can only get the right
environment in place -- the right tools, the right generalizations, economic
incentives, a ``culture of reuse'' -- then reuse of software will soar, with
consequent improvements in productivity and software quality. The analysis
developed in this paper paints a different picture: the extent to which
software reuse can occur is an intrinsic property of a problem domain, and
better tools and culture can have only marginal impact on reuse rates if the
domain is inherently resistant to reuse. We define an entropy parameter $H \in
[0,1]$ of problem domains that measures program diversity, and deduce from this
upper bounds on code reuse and the scale of components with which we may work.
For ``low entropy'' domains with $H$ near 0, programs are highly similar to one
another and the domain is amenable to the Component-Based Software Engineering
(CBSE) dream of programming by composing large-scale components. For problem
domains with $H$ near 1...
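The abstract does not give the paper's exact definition of $H$, but a simple stand-in illustrates the idea: take a probability distribution over the programs demanded in a domain and compute its Shannon entropy, normalized to $[0,1]$. A distribution dominated by one program has $H$ near 0 (high reuse potential); a uniform distribution has $H = 1$.

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of a program-demand distribution, normalized to
    [0, 1] by log2(n). An illustrative stand-in for the paper's H, not
    its actual definition."""
    n = len(probs)
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(n) if n > 1 else 0.0

# low-entropy domain: one program dominates, so reuse can pay off
print(round(normalized_entropy([0.97, 0.01, 0.01, 0.01]), 3))
# high-entropy domain: every program equally likely, little to reuse
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # 1.0
```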
We study the network structure of Wikipedia (restricted to its mathematical
portion), MathWorld, and DLMF. We approach these three online mathematical
libraries from the perspective of several global and local network-theoretic
features, providing for each one the appropriate value or distribution, along
with comparisons that, when possible, also include the whole of Wikipedia or
the Web. We identify some distinguishing characteristics of all three
libraries, most of them presumably traceable to the libraries' shared nature of
relating to a very specialized domain. Among these characteristics are the
presence of a very large strongly connected component in each of the
corresponding directed graphs, the complete absence of any clear power laws
describing the distribution of local features, and the rise to prominence of
some local features (e.g., stress centrality) that can be used to effectively
search for keywords in the libraries.
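The "very large strongly connected component" finding can be reproduced on any link graph with a standard SCC algorithm; below is a compact Kosaraju-style sketch run on a made-up five-page link graph (the page names and links are hypothetical).

```python
def sccs(graph):
    """Kosaraju's algorithm: DFS finishing order on the graph,
    then DFS on the reversed graph in reverse finishing order."""
    order, seen = [], set()
    def dfs1(v):
        seen.add(v)
        for w in graph.get(v, []):
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in graph:
        if v not in seen:
            dfs1(v)
    rev = {v: [] for v in graph}
    for v, ws in graph.items():
        for w in ws:
            rev.setdefault(w, []).append(v)
    comps, seen = [], set()
    def dfs2(v, comp):
        seen.add(v)
        comp.append(v)
        for w in rev.get(v, []):
            if w not in seen:
                dfs2(w, comp)
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs2(v, comp)
            comps.append(sorted(comp))
    return comps

# hypothetical link graph: A, B, C cite each other in a cycle
links = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": ["E"], "E": []}
print(sccs(links))  # A, B, C form one SCC; D and E are singletons
```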
Synthesis is the automatic construction of a system from its specification.
In classical synthesis algorithms it is always assumed that the system is
"constructed from scratch" rather than composed from reusable components. This,
of course, rarely happens in real life. In real life, almost every non-trivial
commercial software system relies heavily on using libraries of reusable
components. Furthermore, other contexts, such as web-service orchestration, can
be modeled as synthesis of a system from a library of components.
In 2009 we introduced LTL synthesis from libraries of reusable components.
Here, we extend the work and study synthesis from component libraries with
"call and return"' control flow structure. Such control-flow structure is very
common in software systems. We define the problem of Nested-Words Temporal
Logic (NWTL) synthesis from recursive component libraries, where NWTL is a
specification formalism, richer than LTL, that is suitable for "call and
return" computations. We solve the problem, providing a synthesis algorithm,
and show that the problem is 2EXPTIME-complete, as is standard synthesis.; Comment: In Proceedings GandALF 2011, arXiv:1106.0814
We present a comparison of several modern C++ libraries providing high-level
interfaces for programming multi- and many-core architectures on top of CUDA or
OpenCL. The comparison focuses on the solution of ordinary differential
equations and is based on odeint, a framework for the solution of systems of
ordinary differential equations. Odeint is designed in a very flexible way and
may be easily adapted for effective use of libraries such as Thrust, MTL4,
VexCL, or ViennaCL, using CUDA or OpenCL technologies. We found that CUDA and
OpenCL work equally well for problems of large sizes, while OpenCL has higher
overhead for smaller problems. Furthermore, we show that modern high-level
libraries make it possible to use the computational resources of many-core
GPUs or multi-core CPUs effectively without much knowledge of the underlying
technologies.; Comment: 21 pages, 4 figures, submitted to SIAM Journal of Scientific
Computing and accepted
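The design idea behind odeint, steppers written against a minimal algebra so the same algorithm can run on CPU containers or GPU vectors, can be sketched in Python. The RK4 stepper below touches the state only through a `scale_add` callback, so swapping the backend means swapping that one function; the names here are illustrative, not odeint's actual API.

```python
import math

def rk4_step(f, y, t, dt, scale_add):
    """One classical Runge-Kutta step; scale_add(a, x, b, z) -> a*x + b*z."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, scale_add(1.0, y, dt / 2, k1))
    k3 = f(t + dt / 2, scale_add(1.0, y, dt / 2, k2))
    k4 = f(t + dt, scale_add(1.0, y, dt, k3))
    y = scale_add(1.0, y, dt / 6, k1)   # y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    y = scale_add(1.0, y, dt / 3, k2)
    y = scale_add(1.0, y, dt / 3, k3)
    return scale_add(1.0, y, dt / 6, k4)

# "backend" for plain Python lists; a GPU backend would supply its own
def list_scale_add(a, x, b, z):
    return [a * xi + b * zi for xi, zi in zip(x, z)]

# integrate dy/dt = -y from y(0) = 1; exact solution is exp(-t)
y, t, dt = [1.0], 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: [-yi for yi in y], y, t, dt, list_scale_add)
    t += dt
print(y[0])  # close to exp(-1) ~ 0.3679
```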
This work introduces a runtime model for managing communication with support
for latency-hiding. The model enables non-computer science researchers to
exploit communication latency-hiding techniques seamlessly. For compiled
languages, it is often possible to create efficient schedules for
communication, but this is not the case for interpreted languages. By
maintaining data dependencies between scheduled operations, it is possible to
aggressively initiate communication and lazily evaluate tasks to allow maximal
time for the communication to finish before entering a wait state. We implement
a heuristic of this model in DistNumPy, an auto-parallelizing version of
numerical Python that allows sequential NumPy programs to run on distributed
memory architectures. Furthermore, we present performance comparisons for eight
benchmarks with and without automatic latency-hiding. The results show that
our model reduces the time spent waiting for communication by as much as 27
times, from a maximum of 54% to only 2% of the total execution time, in a
stencil application.; Comment: PREPRINT
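A minimal sketch (not DistNumPy itself) of the scheduling idea: operations are recorded with their data dependencies rather than executed immediately, communication is initiated as soon as it is scheduled, and computation is deferred until its result is actually needed, giving the communication maximal time to complete in the background. All names below are hypothetical.

```python
class LazyScheduler:
    def __init__(self):
        self.log = []          # observable order of events
        self.pending = {}      # name -> (thunk, dependency names)

    def communicate(self, name):
        self.log.append(f"start_recv {name}")   # initiated eagerly
        self.pending[name] = (lambda: f"data({name})", [])

    def compute(self, name, fn, deps):
        self.pending[name] = (fn, deps)         # deferred: nothing runs yet

    def force(self, name):
        fn, deps = self.pending[name]
        args = [self.force(d) for d in deps]    # evaluate dependencies first
        self.log.append(f"run {name}")
        return fn(*args) if deps else fn()

s = LazyScheduler()
s.communicate("halo")                     # receive starts immediately
s.compute("local", lambda: "L", [])       # purely local work, deferred
s.compute("stencil", lambda a, b: (a, b), ["local", "halo"])
result = s.force("stencil")               # only now does anything execute
print(s.log)
```

The log shows the receive being initiated before any computation runs, which is exactly the window in which communication latency is hidden.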
We present HONEI, an open-source collection of libraries offering a hardware
oriented approach to numerical calculations. HONEI abstracts the hardware, and
applications written on top of HONEI can be executed on a wide range of
computer architectures such as CPUs, GPUs and the Cell processor. We
demonstrate the flexibility and performance of our approach with two test
applications, a Finite Element multigrid solver for the Poisson problem and a
robust and fast simulation of shallow water waves. By linking against HONEI's
libraries, we achieve a twofold speedup over straightforward C++ code using
HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the
Cell and a GPU. A second important aspect of our approach is that the full
performance capabilities of the hardware under consideration can be exploited
by adding optimised application-specific operations to the HONEI libraries.
HONEI provides all necessary infrastructure for development and evaluation of
such kernels, significantly simplifying their development.; Comment: 19 pages, 7 figures
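HONEI's hardware abstraction can be sketched as tag-dispatched operations: application code calls a generic operation, and a backend tag selects the implementation (CPU, SSE, GPU, Cell). The registry and backends below are hypothetical stand-ins, not HONEI's real C++ API.

```python
BACKENDS = {}

def backend(tag):
    """Register a function as the implementation of its name for `tag`."""
    def register(fn):
        BACKENDS[(fn.__name__, tag)] = fn
        return fn
    return register

@backend("cpu")
def vector_sum(xs):
    return sum(xs)

@backend("fake_gpu")  # pretend accelerated path: same result, other route
def vector_sum(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def dispatch(op, tag, *args):
    return BACKENDS[(op, tag)](*args)

print(dispatch("vector_sum", "cpu", [1.0, 2.0, 3.0]))       # 6.0
print(dispatch("vector_sum", "fake_gpu", [1.0, 2.0, 3.0]))  # 6.0
```

Adding an optimised application-specific kernel then amounts to registering one more implementation under a new tag, without touching application code.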
MILJS is a collection of state-of-the-art, platform-independent, scalable,
fast JavaScript libraries for matrix calculation and machine learning. Our core
library offering matrix calculation is called Sushi, which exhibits far better
performance than other leading machine learning libraries written in
JavaScript. Based on Sushi, a machine learning library called Tempura is
provided, which supports various algorithms widely used in machine learning
research. We also provide Soba as a visualization library. The implementations
of our libraries are clearly written, properly documented, and thus easy to get
started with, as long as there is a web browser. These libraries are available
from http://mil-tokyo.github.io/ under the MIT license.
This paper shows an application of the theory of sorting networks to
facilitate the synthesis of optimized general purpose sorting libraries.
Standard sorting libraries are often based on combinations of the classic
Quicksort algorithm with insertion sort applied as the base case for small
fixed numbers of inputs. Unrolling the code for the base case by ignoring loop
conditions eliminates branching and results in code which is equivalent to a
sorting network. This enables the application of further program
transformations based on sorting network optimizations, and eventually the
synthesis of code from sorting networks. We show that when considering only
the number of comparisons and swaps, theory predicts no real advantage of this
approach. However, significant speed-ups are obtained when taking advantage of
instruction level parallelism and non-branching conditional assignment
instructions, both of which are common in modern CPU architectures. We provide
empirical evidence that using code synthesized from efficient sorting networks
as the base case for Quicksort libraries results in significant real-world
speed-ups.; Comment: IMADA-preprint-cs
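The unrolled base case is just a fixed sequence of compare-exchange steps. Below is the optimal 5-comparator network for four inputs, plugged into a toy quicksort as the small-case cutoff; in real machine code the if/swap would become branch-free conditional-move instructions, which is where the measured speed-ups come from.

```python
# Optimal sorting network for 4 inputs: 5 compare-exchange pairs.
NET4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def sort4(a):
    for i, j in NET4:
        if a[i] > a[j]:          # compare-exchange
            a[i], a[j] = a[j], a[i]
    return a

def quicksort(a):
    if len(a) <= 4:
        # networks exist for each small size; for brevity only n=4 is
        # unrolled here, smaller inputs fall back to sorted()
        return sort4(a) if len(a) == 4 else sorted(a)
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

print(quicksort([9, 1, 8, 2, 7, 3, 6, 4, 5, 0]))
```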
Dense linear algebra libraries, such as BLAS and LAPACK, provide a relevant
collection of numerical tools for many scientific and engineering applications.
While there exist high performance implementations of the BLAS (and LAPACK)
functionality for many current multi-threaded architectures, the adaptation of
these libraries to asymmetric multicore processors (AMPs) is still pending. In
this paper we address this challenge by developing an asymmetry-aware
implementation of the BLAS, based on the BLIS framework, and tailored for AMPs
equipped with two types of cores: fast/power hungry versus slow/energy
efficient. For this purpose, we integrate coarse-grain and fine-grain
parallelization strategies into the library routines which, respectively,
dynamically distribute the workload between the two core types and statically
repartition this work among the cores of the same type.
Our results on an ARM big.LITTLE processor embedded in the Exynos 5422 SoC,
using the asymmetry-aware version of the BLAS and a plain migration of the
legacy version of LAPACK, experimentally assess the benefits, limitations, and
potential of this approach.
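The coarse-grain strategy can be sketched as a static row split proportional to each cluster's measured throughput, so both core types finish at roughly the same time; the GFLOPS figures below are made up for illustration, not Exynos 5422 measurements.

```python
def split_rows(n_rows, fast_gflops, slow_gflops):
    """Assign matrix rows to the fast and slow clusters in proportion
    to their throughput, balancing their finishing times."""
    fast_share = fast_gflops / (fast_gflops + slow_gflops)
    fast_rows = round(n_rows * fast_share)
    return fast_rows, n_rows - fast_rows

# hypothetical: big cluster 3x the throughput of the LITTLE cluster
fast, slow = split_rows(1000, fast_gflops=12.0, slow_gflops=4.0)
print(fast, slow)  # 750 250
```

Within each cluster, the fine-grain strategy would then repartition its share evenly among the identical cores of that type.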
One of the most promising applications of mathematical knowledge management
is search: Even if we restrict attention to the tiny fragment of mathematics
that has been formalized, the amount exceeds the comprehension of an individual human.
Based on the generic representation language MMT, we introduce the
mathematical query language QMT: It combines simplicity, expressivity, and
scalability while avoiding a commitment to a particular logical formalism. QMT
can integrate various search paradigms such as unification, semantic web, or
XQuery style queries, and QMT queries can span different mathematical libraries.
We have implemented QMT as a part of the MMT API. This combination provides a
scalable indexing and query engine that can be readily applied to any library
of mathematical knowledge. While our focus here is on libraries that are
available in a content markup language, QMT naturally extends to presentation
and narration markup languages.
Many proof assistant libraries contain formalizations of the same
mathematical concepts. The concepts are often introduced (defined) in different
ways, but the properties that they have, and are in turn formalized, are the
same. For the basic concepts, like natural numbers, matching them between
libraries is often straightforward, because of mathematical naming conventions.
However, for more advanced concepts, finding similar formalizations in
different libraries is a non-trivial task even for an expert.
In this paper we investigate automatic discovery of similar concepts across
libraries of proof assistants. We propose an approach for normalizing
properties of concepts in formal libraries and a number of similarity measures.
We evaluate the approach on HOL based proof assistants HOL4, HOL Light and
Isabelle/HOL, discovering 398 pairs of isomorphic constants and types.
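A toy version of the matching idea: abstract each concept to the set of properties it satisfies, normalize each property by replacing the concept's own name with a placeholder so superficially different libraries become comparable, then score candidate pairs by Jaccard similarity of their property sets. The properties below are hypothetical fragments, not actual HOL4 or Isabelle/HOL theorems.

```python
def normalize(props, name):
    """Replace the concept's own name with a placeholder in each property."""
    return frozenset(p.replace(name, "X") for p in props)

def jaccard(a, b):
    return len(a & b) / len(a | b)

# hypothetical properties of natural-number addition in two libraries
hol = {"plus": ["plus 0 n = n", "plus n m = plus m n", "plus n 0 = n"]}
isa = {"add":  ["add 0 n = n",  "add n m = add m n"]}

scores = {
    (h, i): jaccard(normalize(hp, h), normalize(ip, i))
    for h, hp in hol.items()
    for i, ip in isa.items()
}
print(scores)  # ('plus', 'add') scores 2/3
```

Pairs whose similarity exceeds a threshold become candidate isomorphic constants, which is the shape of the 398 discovered pairs.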