The University of Texas at Austin

CS Technical Abstracts (2011)


Srinath Setty(1), Andrew J. Blumberg(1), Michael Walfish(1). "Toward practical and unconditional verification of remote computations". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-01 (regular tech report). January 12th, 2011.




Amin Shali(1), Sidney Rosario(1), William R. Cook(1). "Hybrid Partial Evaluation". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-02 (regular tech report). January 14th, 2011. 23 pages.

We present hybrid partial evaluation (HPE), a pragmatic approach to partial evaluation that borrows ideas from both online and offline partial evaluation. HPE performs offline-style specialization using an online approach without static binding time analysis. The goal of HPE is to provide a practical and predictable level of optimization for programmers, with an implementation strategy that fits well within existing compilers or interpreters. HPE requires the programmer to specify where partial evaluation should be applied. It provides no termination guarantee, and reports errors in situations that violate simple binding time rules or make incorrect use of side effects in compile-time code. We formalize HPE for a small imperative object-oriented language and describe Civet, a straightforward implementation of HPE as a relatively simple extension of a Java compiler. Code optimized by Civet performs as well as, and in some cases better than, the output of a state-of-the-art offline partial evaluator.
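The programmer-directed flavor of specialization can be illustrated with a small sketch. HPE and Civet target Java; the following hypothetical Python fragment (function names are invented, not from the report) plays the role of a specializer that unrolls a loop over a compile-time exponent, leaving residual code over the runtime argument only:

```python
def specialize_power(n):
    """Specialize power(x, n) for an exponent n known at "compile time".

    Mimics programmer-directed specialization: n is declared compile-time,
    so the loop over n is unrolled, producing residual code in x only.
    """
    terms = ["1.0"] + ["x"] * n           # unroll the loop over the known n
    body = " * ".join(terms)
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)                        # "compile" the residual program
    return env[f"power_{n}"], src

power_3, residual = specialize_power(3)   # residual body: 1.0 * x * x * x
```

In HPE proper the specializer runs inside the compiler and raises errors when binding-time rules are violated; this sketch only shows the residual-code idea.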

DOWNLOAD TR-2011.pdf


Bryan Marker(1), Ernie Chan(1), Jack Poulson(1), Robert van de Geijn(1), Rob F. Van der Wijngaart(2), Timothy G. Mattson(2), Theodore E. Kubaska(2). "Programming Many-Core Architectures - A Case Study: Dense Matrix Computations on the Intel SCC Processor". The University of Texas at Austin, Department of Computer Sciences(1), Intel Corporation, Santa Clara, California 95054(2). Report# TR-11-03 (regular tech report). January 24th, 2011. 18 pages.

A message passing, distributed-memory parallel computer on a chip is one possible design for future many-core architectures. We discuss initial experiences with the Intel Single-chip Cloud Computer research processor, a prototype architecture that incorporates 48 cores on a single die that can communicate via a small, shared, on-die buffer. The experiment is to port a state-of-the-art, distributed-memory, dense matrix library, Elemental, to this architecture and gain insight from the experience. We show that the programmability afforded by this library, especially its proper abstraction for collective communication, greatly aids the porting effort. This enables us to support a wide range of functionality with limited changes to the library code.

DOWNLOAD TR-2012.pdf


Chinmayi Krishnappa(1), C. Greg Plaxton(1). "A Sealed-Bid Unit-Demand Auction with Put Options". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-04 (regular tech report). February 1st, 2011. 48 pages.

We introduce a variant of the classic sealed-bid unit-demand auction in which each item has an associated put option. The put option of an item is held by the seller of the item, and gives the holder the right to sell the item to a specified target bidder at a specified strike price, regardless of market conditions. Unexercised put options expire when the auction terminates. In keeping with the underlying unit-demand framework, we assume that each bidder is the target of at most one put option. The details of each put option --- holder, target, and strike price --- are fixed and publicly available prior to the submission of unit-demand bids. We motivate our unit-demand auction by discussing applications to the reassignment of leases, and to the design of multi-round auctions. In the classic sealed-bid unit-demand setting, the VCG mechanism provides a truthful auction with strong associated guarantees, including efficiency and envy-freedom. In our setting, the strike price of an item imposes a lower bound on its price. The VCG mechanism does not accommodate such lower bound constraints, and hence is not directly applicable. Moreover, the strong guarantees associated with the VCG mechanism in the classic setting cannot be achieved in our setting. We formulate an appropriate solution concept for our setting, and devise a truthful auction for implementing this solution concept. We show how to compute the outcome of our auction in polynomial time.

DOWNLOAD TR-2014.pdf


Jack Poulson(1), Robert A. van de Geijn(2), Jeffrey Bennighof(3). "Parallel Algorithms for Reducing the Generalized Hermitian-Definite Eigenvalue Problem". The University of Texas at Austin, Institute for Computational Engineering and Sciences(1), The University of Texas at Austin, Department of Computer Sciences(2), The University of Texas at Austin, Department of Aerospace Engineering and Engineering Mechanics(3). Report# TR-11-05 (regular tech report). February 15th, 2011. 13 pages.

We discuss the parallel implementation of two operations, A := inv(L) A inv(L)' and A := L' A L, that are important to the solution of dense generalized Hermitian-definite eigenproblems. Here A is Hermitian and L is lower triangular. We use the FLAME formalisms to derive and represent a family of algorithms and implement these using Elemental, a new C++ library for distributed memory architectures that may become the successor to the widely-used ScaLAPACK and PLAPACK libraries. It is shown that, provided the right algorithm is chosen, excellent performance is attained on a large cluster.
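The role of the first operation can be checked numerically. The sketch below (using numpy, not the Elemental library the report describes) shows that A := inv(L) A inv(L)' turns the generalized Hermitian-definite problem A x = lambda B x, with B = L L', into an equivalent standard Hermitian eigenproblem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Hermitian A and Hermitian positive-definite B = L L^H
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = C @ C.conj().T + n * np.eye(n)      # shift to ensure positive definiteness
L = np.linalg.cholesky(B)               # B = L L^H

# Operation 1 of the report: A := inv(L) A inv(L)^H
Linv = np.linalg.inv(L)
A_std = Linv @ A @ Linv.conj().T        # still Hermitian

# Eigenvalues of the standard problem match the generalized problem A x = lambda B x
lam_std = np.linalg.eigvalsh(A_std)
lam_gen = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(B, A))))
```

The second operation, A := L' A L, applies the transformation in the opposite direction (e.g. to map eigenvector computations back); the report's contribution is doing these reductions efficiently in parallel, which this dense sketch does not attempt.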

DOWNLOAD TR-2016.pdf


Cho-Jui Hsieh(1), Inderjit S. Dhillon(1). "Fast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-06 (regular tech report). February 18th, 2011. 24 pages.

Nonnegative Matrix Factorization (NMF) is an effective dimension reduction method for non-negative dyadic data, and has proven to be useful in many areas, such as text mining, bioinformatics and image processing. The mathematical formulation of NMF is a constrained non-convex optimization problem, and many algorithms have been developed for solving it. Recently, a coordinate descent method, called FastHals [3], has been proposed to solve least squares NMF and is regarded as one of the state-of-the-art techniques for the problem. In this paper, we first show that FastHals has an inefficiency in that it uses a cyclic coordinate descent scheme and thus performs unneeded descent steps on unimportant variables. We then present a variable selection scheme that uses the gradient of the objective function to arrive at a new coordinate descent method. Our new method is considerably faster in practice and we show that it has theoretical convergence guarantees. Moreover, when the solution is sparse, as is often the case in real applications, our new method benefits by selecting important variables to update more often, thus resulting in higher speed. As an example, on the text dataset RCV1, our method is 7 times faster than FastHals, and more than 15 times faster when the sparsity is increased by adding an L1 penalty. We also develop new coordinate descent methods for the case when error in NMF is measured by KL-divergence, by applying the Newton method to solve the one-variable sub-problems. Experiments indicate that our algorithm for minimizing the KL-divergence is faster than the Lee & Seung multiplicative rule by a factor of 10 on the CBCL image dataset.
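The gradient-based variable selection idea can be sketched on a single nonnegative least-squares subproblem. This is a simplified, hypothetical reduction (the paper's method operates on the full factor matrices, and `nnls_gcd` is an invented name): at each step the coordinate with the largest predicted objective decrease is updated, rather than cycling over all coordinates as FastHals does.

```python
import numpy as np

def nnls_gcd(W, v, iters=500):
    """Greedy coordinate descent for min_{h >= 0} ||v - W h||^2."""
    WtW = W.T @ W                     # Hessian of the quadratic subproblem
    diag = np.diag(WtW)
    h = np.zeros(W.shape[1])
    g = -(W.T @ v)                    # gradient at h = 0: WtW h - W^T v
    for _ in range(iters):
        # Exact one-variable minimizer for each coordinate, clamped at 0
        step = np.maximum(h - g / diag, 0.0) - h
        dec = -(g * step + 0.5 * diag * step**2)   # predicted decrease per coord
        i = int(np.argmax(dec))                    # greedy (gradient-based) pick
        if dec[i] <= 1e-12:
            break                                  # no coordinate helps anymore
        h[i] += step[i]
        g += WtW[:, i] * step[i]                   # cheap rank-1 gradient update
    return h
```

When the solution is sparse, most coordinates stay at zero and the greedy rule never wastes steps on them, which mirrors the speedup the abstract reports.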

DOWNLOAD TR-2017.pdf


Taehwan Choi(1), H. B. Acharya(1), Mohamed G. Gouda(1). "Is That You? Authentication in a Network without Identities". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-07 (regular tech report). March 1st, 2011. 9 pages.

Most networks require that their users have "identities", i.e. have names that are fixed for a relatively long time, unique, and approved by a central authority (in order to guarantee their uniqueness). Unfortunately, this requirement, which was introduced to simplify the design of networks, has its own drawbacks. First, this requirement can lead to the loss of anonymity of communicating users. Second, it can allow the possibility of identity theft. Third, it can lead some users to trust other users who may not be trustworthy. In this paper, we argue that networks can be designed without user identities and their drawbacks. Our argument consists of providing answers to the following three questions. (1) How can one design a practical network where users do not have identities? (2) What does it mean for a user to authenticate another user in a network without identities? (3) How can one design a secure authentication protocol in a network without identities?

DOWNLOAD TR-2018.pdf


Srinivas Nedunuri(1), Douglas R. Smith(2), William R. Cook(1). "Cost-Based Learning for Planning". The University of Texas at Austin, Department of Computer Sciences(1), Kestrel Institute, Palo Alto, CA 94304(2). Report# TR-11-08 (regular tech report). March 4th, 2011.




Robert van de Geijn(1), Tyler Rhodes(1), Maggie Myers(1), Field G. Van Zee(1). "Deriving Linear Algebra Libraries". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-09 (regular tech report). March 14th, 2011. 11 pages.

Starting in the late 1960s computer scientists including Dijkstra and Hoare advocated goal-oriented programming and formal derivation of algorithms. The problem was that for loop-based programs, a priori determination of loop-invariants, a prerequisite for developing loops, was a task too complex for any but the simplest of operations. We believe that no practical progress was made in the field until around 2000, when we discovered how to systematically determine loop-invariants for operations in the domain of high-performance dense linear algebra libraries. This has led to a multitude of papers, mostly published in the ACM Transactions on Mathematical Software. It has yielded a system for mechanical derivation of algorithms and a high-performance linear algebra library, libflame, that is largely derived to be correct and includes more than a thousand algorithmic variants of algorithms for more than a hundred linear algebra operations. To our knowledge, this success story has unfolded without any awareness on the part of the formal methods community. This paper is meant to raise that awareness.

DOWNLOAD TR-2021.pdf


Allen Clement(1). "UpRight Fault Tolerance". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-10 (dissertation). March 15th, 2011. 235 pages.

Experiences with computer systems indicate an inconvenient truth: computers fail and they fail in interesting ways. Although using redundancy to protect against fail-stop failures is common practice, non-fail-stop computer and network failures occur for a variety of reasons including power outage, disk or memory corruption, NIC malfunction, user error, operating system and application bugs or misconfiguration, and many others. The impact of these failures can be dramatic, ranging from service unavailability to stranding airplane passengers on the runway to companies closing. While high-stakes embedded systems have embraced Byzantine fault tolerant techniques, general purpose computing continues to rely on techniques that are fundamentally crash tolerant. In a general purpose environment, the current best practices response to non-fail-stop failures can charitably be described as pragmatic: identify a root cause and add checksums to prevent that error from happening again in the future. Pragmatic responses have proven effective for patching holes and protecting against faults once they have occurred; unfortunately the initial damage has already been done, and it is difficult to say if the patches made to address previous faults will protect against future failures. We posit that an end-to-end solution based on Byzantine fault tolerant (BFT) state machine replication is an efficient and deployable alternative to current ad hoc approaches favored in general purpose computing. The replicated state machine approach ensures that multiple copies of the same deterministic application execute requests in the same order and provides end-to-end assurance that independent transient failures will not lead to unavailability or incorrect responses.
An efficient and effective end-to-end solution covers faults that have already been observed as well as failures that have not yet occurred, and it provides structural confidence that developers won't have to track down yet another failure caused by some unpredicted memory, disk, or network behavior. While the promise of end-to-end failure protection is intriguing, significant technical and practical challenges currently prevent adoption in general purpose computing environments. On the technical side, it is important that end-to-end solutions maintain the performance characteristics of deployed systems: if end-to-end solutions dramatically increase computing requirements, dramatically reduce throughput, or dramatically increase latency during normal operation, then end-to-end techniques are a non-starter. On the practical side, it is important that end-to-end approaches be both comprehensible and easy to incorporate: if the cost of end-to-end solutions is rewriting an application or trusting intricate and arcane protocols, then end-to-end solutions will not be adopted. In this thesis we show that BFT state machine replication can be used in deployed systems. Reaching this goal requires us to address both the technical and practical challenges previously mentioned. We revisit disparate research results from the last decade and tweak, refine, and revise the core ideas to fit together into a coherent whole. Addressing the practical concerns requires us to simplify the process of incorporating BFT techniques into legacy applications.

DOWNLOAD TR-2023.pdf


Uli Grasemann(1). "A Computational Model of Language Pathology in Schizophrenia". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-11 (dissertation). March 17th, 2011. 147 pages.

No current laboratory test can reliably identify patients with schizophrenia. Instead, key symptoms are observed via language, including derailment, where patients cannot follow a coherent storyline, and delusions, where false beliefs are repeated as fact. Brain processes underlying these and other symptoms remain unclear, and characterizing them would greatly enhance our understanding of schizophrenia. In this situation, computational models can be valuable tools to formulate testable hypotheses and to complement clinical research. This dissertation aims to capture the link between biology and schizophrenic symptoms using DISCERN, a connectionist model of human story processing. Competing illness mechanisms proposed to underlie schizophrenia are simulated in DISCERN, and are evaluated at the level of narrative language, the same level used to diagnose patients. The result is the first simulation of a speaker with schizophrenia. Of all illness models, hyperlearning, a model of overly intense memory consolidation, produced the best fit to patient data, as well as compelling models of delusions and derailments. If validated experimentally, the hyperlearning hypothesis could advance the current understanding of schizophrenia, and provide a platform for simulating the effects of future treatments.

DOWNLOAD TR-2024.pdf


Eric Rozner(1), Mi Kyung Han(1), Lili Qiu(1), Yin Zhang(1). "Model-Driven Optimization of Opportunistic Routing". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-12 (regular tech report). March 17th, 2011. 15 pages.

Opportunistic routing aims to improve wireless performance by exploiting communication opportunities arising by chance. A key challenge in opportunistic routing is how to achieve good, predictable performance despite the incidental nature of such communication opportunities and the complicated effects of wireless interference in IEEE 802.11 networks. To address the challenge, we develop a model-driven optimization framework to jointly optimize opportunistic routes and rate limits for both unicast and multicast traffic. A distinctive feature of our framework is that the performance derived from optimization can be achieved in a real IEEE 802.11 network. Our framework consists of three key components: (i) a model for capturing the interference among IEEE 802.11 broadcast transmissions, (ii) a novel algorithm for accurately optimizing different performance objectives, and (iii) effective techniques for mapping the resulting solutions to practical routing configurations. Extensive simulations and testbed experiments show that our approach significantly outperforms state-of-the-art shortest path routing and opportunistic routing protocols. Moreover, the difference between the achieved performance and our model estimation is typically within 20%. Evaluation in dynamic and uncontrolled environments further shows that our approach is robust against inaccuracy introduced by a dynamic network and it also consistently outperforms the existing schemes. These results clearly demonstrate the effectiveness and accuracy of our approach.

DOWNLOAD TR-2025.pdf


H. B. Acharya(1), Taehwan Choi(1), Rida A. Bazzi(2), Mohamed G. Gouda(1). "The K-Observer Problem in Computer Networks". The University of Texas at Austin, Department of Computer Sciences(1), Arizona State University, School of Computing and Informatics(2). Report# TR-11-13 (regular tech report). March 22nd, 2011. 14 pages.

For any non-negative integer K, a K-observer P of a network N is a set of nodes in N such that each message that travels at least K hops in N is handled (and so observed) by at least one node in P. A K-observer P of a network N is minimum if the number of nodes in P is less than or equal to the number of nodes in every K-observer of N. The nodes in a minimum K-observer of a network N can be used to monitor the message traffic in network N and collect traffic statistics, detect patterns of denial-of-service attacks when they occur in N, and act as firewalls to identify and discard attack messages from N. In this paper, we consider the problem of constructing a minimum K-observer for any given network. We show that the problem is NP-hard for general networks, and give linear time algorithms for constructing minimum or near-minimum K-observers for special classes of networks: trees, rings, L-rings, and large grids.
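The defining condition is easy to check when the network is a tree: P is a K-observer exactly when every component of N minus P has diameter less than K hops. A small sketch of that check (invented helper names; the double-BFS diameter computation is only valid when N minus P is a forest, e.g. when N is a tree):

```python
from collections import deque

def _bfs(adj, start, blocked):
    """Hop distances from start in adj, never entering blocked nodes."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in blocked and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_k_observer(adj, P, K):
    """True iff every path of >= K hops in the tree adj meets some node of P,
    i.e. each component of N - P has diameter < K hops."""
    blocked, seen = set(P), set(P)
    for s in adj:
        if s in seen:
            continue
        comp = _bfs(adj, s, blocked)
        seen.update(comp)
        a = max(comp, key=comp.get)              # endpoint of a longest path
        diam = max(_bfs(adj, a, blocked).values())  # double BFS: tree diameter
        if diam >= K:
            return False                          # an unobserved K-hop path exists
    return True
```

For example, on the 5-node path 0-1-2-3-4, the middle node {2} is a 2-observer, while {1} is not (the path 2-3-4 travels 2 hops unobserved). Constructing a *minimum* K-observer is the hard part the report addresses.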

DOWNLOAD TR-2026.pdf


Soo-el Son(1), Vitaly Shmatikov(1). "SAFERPHP: Finding Semantic Vulnerabilities in PHP Applications". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-14 (regular tech report). April 4th, 2011.




Taehwan Choi(1), H. B. Acharya(1), Mohamed G. Gouda(1). "TPP: The Two-Way Password Protocol". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-15 (regular tech report). April 4th, 2011. 6 pages.

The need for secure communication in the Internet has led to the widespread deployment of secure application-level protocols. The current state-of-the-art is to use TLS, in conjunction with a password protocol. The password protocol, which we call a one-way password protocol (OPP), authenticates the client to the server, using a particular secret called the password. TLS has two functions: (1) it allows a user to authenticate the server, and (2) it ensures secure communication between client and server. The second function effectively provides a secure channel for end-to-end communication between client and server. However, the first function is frequently compromised by a variety of phishing attacks. In this paper, we address this problem by developing a password protocol which we name the Two-Way Password Protocol (TPP). TPP, when used in conjunction with TLS, ensures that clients correctly authenticate servers, and are protected from phishing attacks. The first contribution of this paper is to develop a protocol, called the Universal Password Protocol (UPP), which ensures that a client's password is kept safe even in the case of a successful phishing attack. However, it may be noted that a client, after logging in, frequently shares other secrets (such as a credit card number) over the secure connection, and UPP cannot protect these. Our second contribution is to build on UPP and develop, first, the Hash-Hash Password Protocol (HPP), and finally an improved version named the Two-Way Password Protocol (TPP), which ensures that both server and client are properly authenticated to each other. This ensures the security of all secrets which should be known only to the client and server, including, of course, the password.
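The report's protocols are not reproduced here; as a hedged illustration of the two-way idea (mutual authentication derived from a shared password, so a phishing site that lacks the password cannot impersonate the server), consider this generic challenge-response sketch. All names and parameter choices are illustrative, not TPP's actual construction:

```python
import hashlib
import hmac
import os

def kdf(password: bytes, salt: bytes) -> bytes:
    # Derive a shared key from the password (illustrative parameters)
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

def respond(key: bytes, challenge: bytes, role: bytes) -> bytes:
    # Binding the role prevents reflecting one side's proof back at it
    return hmac.new(key, role + challenge, hashlib.sha256).digest()

salt = b"demo-salt"
key = kdf(b"correct horse battery staple", salt)   # both sides hold this

c_nonce, s_nonce = os.urandom(16), os.urandom(16)
server_proof = respond(key, c_nonce, b"server")    # server answers client's nonce
client_proof = respond(key, s_nonce, b"client")    # client answers server's nonce

# Each side verifies the other's proof with its own copy of the key;
# a phisher without the password cannot produce a valid server_proof.
assert hmac.compare_digest(server_proof, respond(key, c_nonce, b"server"))
```

The design point mirrored here is that authentication runs in both directions over the TLS channel, rather than only client-to-server as in a one-way password protocol.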

DOWNLOAD TR-2028.pdf


Srinath Setty(1), Andrew J. Blumberg(1), Michael Walfish(1). "Toward practical and unconditional verification of remote computations (supplement)". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-16 (regular tech report). April 5th, 2011.




Don Batory(1), Bryan Marker(1). "Correctness Proofs of the Gamma Database Machine Architecture". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-17 (regular tech report). April 6th, 2011. 15 pages.

The Gamma database machine encodes a classical way in which relational joins are parallelized. In a companion technical report, we derived a model-driven engineering architecture of Gamma as a series of model-to-model transformations. In this report, we give a proof of correctness for each of these transformations. By starting with a correct architecture (a single join component), and applying correct transformations, the resulting architecture (Gamma's parallel join architecture) is correct. Our work is an example of a correct-by-construction approach.

DOWNLOAD TR-2030.pdf


Bryan Marker(1), Andy Terrel(2), Jack Poulson(3), Don Batory(1), Robert van de Geijn(1). "Mechanizing the Expert Dense Linear Algebra Developer". The University of Texas at Austin, Department of Computer Sciences(1), The University of Texas at Austin, Texas Advanced Computing Center(2), The University of Texas at Austin, Institute for Computational Engineering and Sciences(3). Report# TR-11-18 (regular tech report). April 25th, 2011. 17 pages.

Sustained high performance on the fastest computers in the world has traditionally been accomplished by experts who carefully hand-code key routines. This quickly becomes unmanageable for large bodies of software and/or as architectures change enough that entire libraries need to be rewritten. We believe the problem is that a typical library for scientific computing exists as a set of routines that are manually instantiated in a specific programming language. We advocate developing libraries of the future as algorithms and fundamental kernels that encode computation and data movement, together with expert knowledge. From this a tool will synthesize efficient, platform-specific implementations by composing, transforming, and optimizing library algorithms, accomplishing automatically what today's experts do manually. This paper illustrates how this can be achieved for the domain of dense linear algebra and gives preliminary results for the automatic customization and optimization of dense linear algebra algorithms targeting large distributed memory cluster architectures.

DOWNLOAD TR-2032.pdf


Taylor L. Riché(1), Don Batory(1), Rui Gonçalves(2), Bryan Marker(1). "Software Architecture Design by Transformation". The University of Texas at Austin, Department of Computer Sciences(1), Universidade do Minho, Braga, Portugal(2). Report# TR-11-19 (regular tech report). April 29th, 2011.




Edmund L. Wong(1), Isaac Levy(1), Lorenzo Alvisi(1), Allen Clement(1), Mike Dahlin(1). "Regret-freedom isn't free". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-20 (regular tech report). May 2nd, 2011. 15 pages.

Cooperative, peer-to-peer (P2P) services—distributed systems consisting of participants from multiple administrative domains (MAD)—must deal with the threat of arbitrary (Byzantine) failures while incentivizing the cooperation of potentially selfish (rational) nodes that such services rely on to function. Although previous work has generally agreed that these types of participants need to be considered in any formal analysis, there have been differing viewpoints about what the conditions for rational cooperation, in the face of Byzantine failure, need to be. In this paper, we show that regret-freedom, a natural extension of traditional Byzantine fault tolerance that requires optimal choices regardless of how Byzantine failures occur, is unattainable in realistic cooperative services. We argue that protocols should instead aim to be regret-brave: take actions that are in a rational node’s best interest based on some prior expectation of Byzantine failures. We demonstrate that by doing so we can provide strong guarantees without sacrificing real-world viability.

DOWNLOAD TR-2034.pdf


Taylor L. Riché(1), Don Batory(1), Joon-Myung Kang(2). "Transformational Support for Testing Streaming Services". The University of Texas at Austin, Department of Computer Sciences(1), Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea(2). Report# TR-11-21 (regular tech report). May 2nd, 2011.

We design streaming services and streaming applications by incrementally applying transformations. In this paper, we combine this design approach with a common testing technique called capture and replay. Doing so offers the following benefits for testing streaming applications: automatic test inference, reduced number or scope of tests, and reduced resources needed for tests. We review an execution environment that implements our ideas and present three case studies for evaluation.



Prince Mahajan(1), Lorenzo Alvisi(1), Mike Dahlin(1). "Consistency, Availability, and Convergence". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-22 (regular tech report). May 2nd, 2011. 31 pages.

We examine the limits of consistency in fault-tolerant distributed storage systems. In particular, we identify fundamental tradeoffs among the properties of consistency, availability, and convergence, and we close the gap between what is known to be impossible (i.e., CAP) and known systems that are highly available but provide weaker consistency, such as causal consistency. Specifically, in the asynchronous model with omission failures and unreliable networks, we show the following tight bound: no consistency stronger than real-time causal consistency (RTC) can be provided in an always-available, one-way convergent system, and RTC can be provided in an always-available, one-way convergent system. In the asynchronous, Byzantine-failure model, we show that it is impossible to implement many of the recently introduced fork-based consistency semantics without sacrificing either availability or convergence; notably, proposed systems allow Byzantine nodes to permanently partition correct nodes from one another. To address this limitation, we introduce bounded fork-join causal semantics, which extends causal consistency to Byzantine environments while retaining availability and convergence.

DOWNLOAD TR-2036.pdf


Xiuming Zhu(1), Pei-chi Huang(1), Song Han(1), Aloysius K. Mok(1), Deji Chen(2), Mark Nixon(2). "MinMax: A Sampling Interval Control Algorithm for Process Control Systems". The University of Texas at Austin, Department of Computer Sciences(1), Emerson Process Management(2). Report# TR-11-23 (regular tech report). May 5th, 2011. 12 pages.

The traditional sampling method in process control systems is based on a periodic task model. This is because controllers are executed in a strictly periodic manner. Sensors sample the process data and send it periodically to the appropriate controllers through a communication system such as the fieldbus. Since the fieldbus is shared by multiple sensors, there is some delay (control loop latency) between the sampling and control actions. In order to minimize the control loop latency, a higher than necessary sampling frequency is typically adopted, which results in unnecessary waste of energy. In this paper, we propose MinMax: a sampling interval control algorithm for tackling this problem. In MinMax, sampling tasks are not periodic but have both maximum and minimum distance constraints. This sampling model has advantages that are especially important in the domain of wireless control for industrial automation. We shall then discuss the jitter property of sampling schemes under this model and propose algorithms for controlling the sampling intervals of sensors in terms of the MinMax problem (UMinMax), which we shall introduce. Though this problem is NP-hard in general, even for the special case of unit-time tasks, we show how to reduce MinMax to well-studied scheduling models such as Liu and Layland-type periodic models and pinwheel models, at the expense of some loss of schedulability. These reductions allow us to derive efficient schedulability tests that can be used to solve the sampling interval control problem in practice. Simulations are used to compare the performance of different UMinMax schedulers in two key figures of merit: the acceptance ratio and the jitter ratio. Simulation of a process control system model also shows that UMinMax can reduce about 40% of the traffic load on the communication system, which is especially important for energy-aware wireless process control applications.
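The minimum/maximum distance constraints are concrete enough to state in a few lines. A minimal sketch (invented helper names, not the paper's schedulers) that verifies a cyclic schedule, one sensor sampled per slot, against per-sensor gap bounds, including the wrap-around gap of the cycle:

```python
def schedule_ok(schedule, constraints):
    """Verify a cyclic slot schedule against per-sensor (dmin, dmax) gap
    constraints: every gap between consecutive samples of a sensor,
    including the wrap-around gap, must lie in [dmin, dmax] slots."""
    n = len(schedule)
    slots = {}
    for i, sensor in enumerate(schedule):
        slots.setdefault(sensor, []).append(i)
    for sensor, (dmin, dmax) in constraints.items():
        ts = slots.get(sensor, [])
        if not ts:
            return False                       # sensor never sampled
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        gaps.append(ts[0] + n - ts[-1])        # wrap-around gap in the cycle
        if any(g < dmin or g > dmax for g in gaps):
            return False
    return True
```

For instance, the 4-slot cycle A, B, A, B gives every sensor an exact gap of 2, so it satisfies (dmin, dmax) = (2, 2) for both sensors, whereas A, A, B, B violates it. Finding such a schedule, rather than checking one, is the NP-hard problem the report reduces to periodic and pinwheel models.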

DOWNLOAD TR-2037.pdf


Doo Soon Kim(1), Ken Barker(1), Bruce Porter(1). "Delaying ambiguity resolution in the pipeline architecture to avoid aggressive pruning". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-24 (regular tech report). May 14th, 2011. 6 pages.

Text interpretation involves several component tasks, including parsing, word-sense disambiguation and semantic relation assignment. To combine these tasks, the pipeline architecture has been widely used. Components for each task are connected serially and each one passes a single solution to the next component in line. This architecture, however, has a serious disadvantage that can significantly reduce interpretation accuracy: the components are required to commit to a single interpretation, even in the absence of sufficient evidence to resolve ambiguity. To address this problem, we present a method that allows the system to delay ambiguity resolution in the pipeline architecture until sufficient evidence accrues. Our approach allows the system to maintain multiple candidate interpretations during the interpretation. Then, the system chooses the best overall semantic representation at the end of the pipeline, aided by an external knowledge resource, OntoNote. A key element in our approach is a novel representation, the packed representation, which allows efficient management of a myriad of candidate interpretations. We show experimentally that our approach produces better semantic representations of text than the traditional pipelined system.

DOWNLOAD TR-2039.pdf


Benjamin Delaware(1), William R. Cook(1), Don Batory(1). "Theorem Proving for Product Lines". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-25 (regular tech report). May 15th, 2011.




David Pardoe(1). "Adaptive Trading Agent Strategies Using Market Experience". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-26 (dissertation). May 16th, 2011. 348 pages.

Along with the growth of electronic commerce has come an interest in developing autonomous trading agents. Often, such agents must interact directly with other market participants, and so the behavior of these participants must be taken into account when designing agent strategies. One common approach is to build a model of the market, but this approach requires the use of historical market data, which may not always be available. This dissertation addresses such a case: that of an agent entering a new market in which it has no previous experience. While the agent could adapt by learning about the behavior of other market participants, it would need to do so in an online fashion. The agent would not necessarily have to learn from scratch, however. If the agent had previous experience in similar markets, it could use this experience to tailor its learning approach to its particular situation. This dissertation explores methods that a trading agent could use to take advantage of previous market experience when adapting to a new market. Two distinct learning settings are considered. In the first, an agent acting as an auctioneer must adapt the parameters of an auction mechanism in response to bidder behavior, and a reinforcement learning approach is used. The second setting concerns agents that must adapt to the behavior of competitors in two scenarios from the Trading Agent Competition: supply chain management and ad auctions. Here, the agents use supervised learning to model the market. In both settings, methods of adaptation can be divided into four general categories: i) identifying the most similar previously encountered market, ii) learning from the current market only, iii) learning from the current market but using previous experience to tune the learning algorithm, and iv) learning from both the current and previous markets. 
The first contribution of this dissertation is the introduction and experimental validation of a number of novel algorithms for market adaptation fitting these categories. The second contribution is an exploration of the degree to which the quantity and nature of market experience impact the relative performance of methods from these categories.

DOWNLOAD TR-2041.pdf


Jad Naous(1), Michael Walfish(2), Antonio Nicolosi(3), David Mazieres(2), Michael Miller(2), Arun Seehra(2). "Panda: Verifying and enforcing network paths". Stanford University(1), The University of Texas at Austin, Department of Computer Sciences(2), Stevens Institute of Technology(3). Report# TR-11-27 (regular tech report). June 17th, 2011. 14 pages.

We describe a new networking primitive, called a Path Verification Mechanism (PVM). There has been much recent work about how senders and receivers _express_ policies about the paths that their packets take. For instance, users may want to choose providers whom they trust to be discreet, or a receiver may want traffic destined to it to travel through an intrusion detection service. While the ability to express policies has been well-studied, the ability to _enforce_ policies has not. The core challenge is: if we assume an adversarial, decentralized, and high-speed environment, then when a packet arrives at a node, how can the node be sure that the packet followed an approved path? Our solution, Panda, incorporates an optimized cryptographic construction that is compact, and requires negligible configuration state and no PKI. We demonstrate Panda's plausibility with a NetFPGA hardware implementation. At 86% more costly than an IP router on the same platform, its cost is significant but affordable. Indeed, our evaluation suggests that Panda can scale to backbone speeds.
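
To make the flavor of such a primitive concrete, here is a hypothetical sketch (not Panda's actual construction, whose optimized design the report describes): each on-path node extends a proof carried by the packet with a MAC under a key it shares pairwise with the checker, so verification needs per-peer secrets but no PKI. All names and the chaining scheme below are illustrative assumptions.

```python
import hmac, hashlib

def extend_proof(proof, hop_key, packet_digest):
    # Each approved hop folds its MAC over the running proof and the
    # packet digest, binding the proof to both the path and the packet.
    return hmac.new(hop_key, proof + packet_digest, hashlib.sha256).digest()

def build_proof(path_keys, packet_digest):
    proof = b"\x00" * 32  # fixed initial value known to the checker
    for k in path_keys:
        proof = extend_proof(proof, k, packet_digest)
    return proof

def verify(path_keys, packet_digest, proof):
    # The checker recomputes the chain for the approved path and compares.
    return hmac.compare_digest(build_proof(path_keys, packet_digest), proof)
```

Because the MACs are chained in order, a proof built over a different hop sequence fails verification, which is the property a path-enforcement mechanism needs.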

DOWNLOAD TR-2042.pdf


Juan Sequeda(1), Marcelo Arenas(2), Daniel P. Miranker(1). "On Directly Mapping Relational Databases to RDF and OWL (extended version)". The University of Texas at Austin, Department of Computer Sciences(1), Pontificia Universidad Catolica de Chile, Department of Computer Science(2). Report# TR-11-28 (regular tech report). June 21st, 2011. 21 pages.

Mapping relational databases to RDF is a fundamental problem for the development of the Semantic Web. We present a solution, inspired by draft methods defined by the W3C, in which relational databases are directly mapped to RDF and OWL. Given a relational database schema and its integrity constraints, this direct mapping produces an OWL ontology, which, in turn, provides the basis for generating RDF instances. The semantics of this mapping is defined using Datalog. Four fundamental properties of such mappings are monotonicity, information preservation, query preservation and semantics preservation. We prove that our mapping is monotone, information preserving and query preserving. We also prove that no monotone mapping, including ours, is semantics preserving. We realize that monotonicity is an obstacle and thus present a non-monotone direct mapping that is semantics preserving. Additionally, we foresee the existence of monotone direct mappings that are semantics preserving if OWL is extended with the epistemic operator.
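
The general shape of a direct mapping can be sketched as follows: each table becomes a class, each column a property, and each row a subject IRI keyed by its primary key. This is an illustrative toy (the report defines the actual mapping with Datalog rules; the base IRI and naming scheme below are assumptions, not the paper's):

```python
BASE = "http://example.org/db/"  # hypothetical base IRI

def direct_map(table, pk, rows):
    """Yield (subject, predicate, object) triples for one table."""
    class_iri = BASE + table
    for row in rows:
        # Subject IRI built from the table name and primary-key value.
        subj = f"{BASE}{table}/{pk}={row[pk]}"
        yield (subj, "rdf:type", class_iri)
        for col, val in row.items():
            # One data triple per column value.
            yield (subj, f"{BASE}{table}#{col}", val)
```

For example, a `Student` table with primary key `sid` yields a typing triple plus one triple per column for each row.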

DOWNLOAD TR-2043.pdf


Ben Wiedermann(1). "Integrating Programming Languages and Databases via Program Analysis and Language Design". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-29 (dissertation). June 29th, 2011. 186 pages.

Researchers and practitioners alike have long sought to integrate programming languages and databases. Today's integration solutions focus on the data-types of the two domains, but today's programs lack transparency. A transparently persistent program operates over all objects in a uniform manner, regardless of whether those objects reside in memory or in a database. Transparency increases modularity and lowers the barrier of adoption in industry. Unfortunately, fully transparent programs perform so poorly that no one writes them. The goal of this dissertation is to increase the performance of these programs to make transparent persistence a viable programming paradigm. This dissertation contributes two novel techniques that integrate programming languages and databases. Our first contribution -- called query extraction -- is based purely on program analysis. Query extraction analyzes a transparent, object-oriented program that retrieves and filters collections of objects. Some of these objects may be persistent, in which case the program contains implicit queries of persistent data. Our interprocedural program analysis extracts these queries from the program, translates them to explicit queries, and transforms the transparent program into an equivalent one that contains the explicit queries. Query extraction enables programmers to write programs in a familiar, modular style and to rely on the compiler to transform their program into one that performs well. Our second contribution -- called RBI-DB+ -- is an extension of a new programming language construct called a batch block. A batch block provides a syntactic barrier around transparent code. It also provides a latency guarantee: If the batch block compiles, then the code that appears in it requires only one client-server communication trip. Researchers previously have proposed batch blocks for databases. 
However, batch blocks cannot be modularized or composed, and database batch blocks do not permit programmers to modify persistent data. We extend database batch blocks to address these concerns and formalize the results. Today's technologies integrate the data-types of programming languages and databases, but they discourage programmers from using procedural abstraction. Our contributions restore procedural abstraction's use in enterprise applications, without sacrificing performance. We argue that industry should combine our contributions with data-type integration. The result would be a robust, practical integration of programming languages and databases.

DOWNLOAD TR-2044.pdf


Chinmayi Krishnappa(1), C. Greg Plaxton(1). "A Dynamic Unit-Demand Auction with Bid Revision and Sniping Fees". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-30 (regular tech report). June 30th, 2011. 12 pages.

We present a dynamic unit-demand auction that supports arbitrary bid revision. Each round of the dynamic auction takes a tentative allocation and pricing as part of the input, and allows each bidder --- including a tentatively allocated bidder --- to submit an arbitrary unit-demand bid. We establish strong properties of the dynamic auction related to truthfulness and efficiency. Using a certain privacy preservation property of each round of the auction, we show that the overall dynamic auction is highly resistant to shilling. We present a fast algorithm for implementing the proposed auction. Using this algorithm, the amortized cost of processing each bidding operation is upper bounded by the complexity of solving a single-source shortest paths problem on a graph with nonnegative edge weights and a node for each item in the auction. We propose a dynamic price adjustment scheme that discourages sniping by providing incentives to bid early in the auction.
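
The complexity bound refers to a standard primitive: a single-source shortest-path computation over nonnegative edge weights, for which Dijkstra's algorithm suffices. A minimal sketch of that primitive (the graph encoding here is generic, not the paper's auction graph):

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths with nonnegative edge weights.
    adj maps node -> list of (neighbor, weight) pairs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O((V + E) log V), so the amortized per-bid cost stated in the abstract is modest even for large item sets.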

DOWNLOAD TR-2045.pdf


Dimitrios Prountzos(1), Keshav Pingali(1). "An Asynchronous Parallel Algorithm for Betweenness Centrality". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-31 (regular tech report). August 22nd, 2011.




Srinath Setty(1), Richard McPherson(1), Andrew J. Blumberg(1), Michael Walfish(1). "Making argument systems for outsourced computation practical (sometimes)". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-32 (regular tech report). August 25th, 2011. 22 pages.

We describe the design, implementation, and evaluation of a system for performing unconditionally verifiable outsourced computation. It has long been known that (1) this problem can be solved in theory using probabilistically checkable proofs (PCPs) coupled with modern cryptographic tools, and (2) these solutions have wholly impractical performance, according to the conventional (and well-founded) wisdom. Our goal is to challenge (2), with a built system that implements an argument system based on PCPs. We describe a general-purpose system that builds on work of Ishai et al. (CCC '07) and incorporates new theoretical work to improve performance by 20 orders of magnitude. The system is (arguably) practical in some cases, suggesting that, as a tool for building secure systems, PCPs are not a lost cause.

DOWNLOAD TR-2048.pdf


Don Batory(1), C. T. Shepherd(1). "Product Lines of Product Lines". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-33 (regular tech report). September 28th, 2011. 12 pages.

A Software Product Line (SPL) is a set of related programs. A SPL of a SPL (SPL^2), or a product line of a product line, is a fundamental scaling of product line concepts. A member of a SPL^2 is not a program, but a SPL. The idea recurses: a SPL^3 is a SPL of a SPL^2, and so on. The depth of recursion is the rank of the product line. We present an algebra to understand and build feature-based product lines of arbitrary rank that is consistent with our experiences in building several SPLs of rank two and higher.

DOWNLOAD TR-2049.pdf


Amitanand Aiyer(1), Lorenzo Alvisi(2), Allen Clement(3), Alessandro Epasto(4), Silvio Lattanzi(5), Alessandro Panconesi(4). "Social networks and the local nature of Sybil defense". Facebook(1), The University of Texas at Austin, Department of Computer Sciences(2), MPI SWS(3), Sapienza University, Rome(4), Google, Inc(5). Report# TR-11-34 (regular tech report). January 31st, 2011. 14 pages.

This paper explores the fundamental limits of using only the structure of social networks to defend against sybil attacks. We derive the number of attack edges needed to foil sybil defenses based on each of the known statistical properties of social graphs: our results suggest that it may be impossible to use these properties to identify with high probability both sybil and honest nodes. We then explore a more modest goal for social defenses: to use local graph properties, such as community structure, to provide an honest node with a way to white-list a set of trustworthy nodes. While recent literature claims that any community detection algorithm may prove equally useful, we find that the choice of community detection protocol can dramatically affect whether sybil nodes are successfully kept from being white-listed. We demonstrate both analytically and experimentally that Local Page Rank can very effectively exploit a community's local properties to keep sybil nodes at bay.
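
As a rough illustration of the white-listing idea (a simplified power-iteration sketch, not the exact protocol evaluated in the paper), PageRank personalized to a single honest seed concentrates score inside the seed's community, so low-scoring nodes, including most sybils behind few attack edges, stay off the white-list:

```python
def personalized_pagerank(adj, seed, alpha=0.15, iters=50):
    """PageRank personalized to `seed` by power iteration.
    adj maps node -> list of out-neighbors; restart mass goes to seed."""
    nodes = list(adj)
    pr = {v: 0.0 for v in nodes}
    pr[seed] = 1.0
    for _ in range(iters):
        nxt = {v: (alpha if v == seed else 0.0) for v in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                      # dangling node: return mass
                nxt[seed] += (1 - alpha) * pr[u]
                continue
            share = (1 - alpha) * pr[u] / len(out)
            for v in out:
                nxt[v] += share
        pr = nxt
    return pr
```

A node would white-list, say, the top-k scorers under its own personalization; nodes reachable only through a narrow cut of attack edges receive little mass.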

DOWNLOAD TR-2050.pdf


Song Han(1), Kam-yiu Lam(2), Jiantao Wang(2), Sang H. Son(3), Aloysius K. Mok(1). "Adaptive Co-Scheduling for Periodic Update and User Transactions in Real-Time Database Systems". The University of Texas at Austin, Department of Computer Sciences(1), City University of Hong Kong(2), University of Virginia(3). Report# TR-11-35 (regular tech report). October 12th, 2011. 14 pages.

In this paper, we study the co-scheduling problem of periodic user transactions and update transactions in real-time database systems for surveillance of critical events. To perform the surveillance functions effectively, it is important to meet the deadlines of all the periodic user transactions and at the same time to maintain the quality of the real-time data objects they access. Unfortunately, these two goals conflict and are difficult to achieve at the same time. To address the co-scheduling problem, we propose a real-time co-scheduling algorithm, called Adaptive Earliest Deadline First Co-Scheduling (AEDF-Co), in which we adopt dynamic scheduling to adaptively schedule the update and user jobs based on their deadlines. The performance goal of AEDF-Co is, for given sets of periodic user and update transactions, to determine a schedule such that the deadlines of all the user transactions are satisfied and, at the same time, the quality of the real-time data objects is maximized. Extensive simulation experiments are performed to evaluate the performance of AEDF-Co. The results show that, by adaptively adjusting the release times of update jobs and scheduling the update and user jobs with a dynamic scheduling algorithm, AEDF-Co is effective in achieving the performance goal and maximizing system performance.

DOWNLOAD TR-2051.pdf


Field G. Van Zee(1), Robert A. van de Geijn(1), Gregorio Quintana-Orti(2). "Restructuring the QR Algorithm for High-Performance Application of Givens Rotations". The University of Texas at Austin, Department of Computer Sciences(1), Universidad Jaume I - Spain, Departamento de Ingenieria y Ciencia de Computadores(2). Report# TR-11-36 (regular tech report). October 18th, 2011. 44 pages.

We show how the QR algorithm can be restructured so that it becomes rich in operations that can achieve near-peak performance on a modern processor. The key is a novel algorithm for applying multiple sets of Givens rotations. We demonstrate the merits of this new QR algorithm for computing the Hermitian (symmetric) eigenvalue decomposition and singular value decomposition of dense matrices when all eigenvectors/singular vectors are computed. The approach yields vastly improved performance relative to the traditional QR algorithm and is competitive with two commonly used alternatives—Cuppen’s Divide and Conquer algorithm and the Method of Multiple Relatively Robust Representations—while inheriting the more modest O(n) workspace requirements of the original QR algorithm. Since the computations performed by the restructured algorithm remain essentially identical to those performed by the original method, robust numerical properties are preserved.
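
For reference, a single Givens rotation is computed and applied as below. This is the textbook building block only; the report's contribution is restructuring how many such rotations are applied together for performance, which this plain sketch does not capture:

```python
import math

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] maps (a, b) to (r, 0)."""
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

def apply_givens(c, s, x, y):
    """Apply the rotation to the vector pair (two rows or columns),
    in place, one entry at a time."""
    for i in range(len(x)):
        xi, yi = x[i], y[i]
        x[i] = c * xi + s * yi
        y[i] = -s * xi + c * yi
```

Choosing (c, s) from the leading entries of two rows zeroes the leading entry of the second row, the elementary step that the QR algorithm repeats many times.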

DOWNLOAD TR-2052.pdf


Nagarajan Natarajan(1), U. Martin Blom(2), Ambuj Tewari(1), John O. Woods(3), Inderjit S. Dhillon(1), Edward M. Marcotte(4). "Predicting gene-phenotype associations using multiple species". The University of Texas at Austin, Department of Computer Sciences(1), Center for Systems and Synthetic Biology, Institute for Cellular and Molecular Biology, University of Texas and Program in Computational and Applied Mathematics, University of Texas at Austin(2), Center for Systems and Synthetic Biology, Institute for Cellular and Molecular Biology, University of Texas at Austin(3), Center for Systems and Synthetic Biology, Institute for Cellular and Molecular Biology, University of Texas and Department of Chemistry and Biochemistry, University of Texas at Austin(4). Report# TR-11-37 (regular tech report). October 20th, 2011. 14 pages.

Correctly predicting genes associated with hereditary diseases is a first step to understanding the molecular mechanisms that lead to these diseases, and in the long run, to developing effective remedies for them. In this work, we combine the power of functional gene-gene interaction networks with the phenotypic information from multiple species in a walk-based framework, and use a novel machine learning formulation called PU learning to infer the weights for walks; we do so by deriving features from walks in a combined network consisting of all our information sources. We evaluate our methods on a number of diseases downloaded from the Online Mendelian Inheritance in Man (OMIM) project. We demonstrate high recall for known diseases by cross-validation, and show that PU learning based methods using walk-based features outperform a state-of-the-art method that uses a similar walk-based framework.

DOWNLOAD TR-2053.pdf


Song Han(1), Deji Chen(2), Ming Xiong(3), Kam-yiu Lam(4), Aloysius K. Mok(1), Krithi Ramamritham(5). "Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness". The University of Texas at Austin, Department of Computer Sciences(1), Emerson Process Management(2), Google(3), CityU of Hong Kong(4), IIT Bombay(5). Report# TR-11-38 (regular tech report). November 3rd, 2011. 14 pages.

Although the deferrable scheduling algorithm for fixed priority transactions (DS-FP) has been shown to outperform the More-Less (ML) method, there has been no comprehensive study of the necessary and sufficient conditions for the schedulability of DS-FP. In this paper, we first analyze the necessary and sufficient schedulability conditions for DS-FP, and then propose a schedulability test algorithm for DS-FP by exploiting the fact that there always exists a repeating pattern in a DS-FP schedule. Instead of using fixed priority scheduling as adopted in DS-FP, we extend deferrable scheduling to a dynamic priority scheduling algorithm called DS-EDF by applying the earliest deadline first (EDF) policy to schedule update jobs. We propose a schedulability test algorithm for DS-EDF and compare its performance with DS-FP and ML through extensive simulation experiments. The results show that the schedulability test algorithms are effective. Although the schedulability of DS-EDF is lower than that of DS-FP, and the repeating patterns in DS-EDF schedules are longer than those in DS-FP due to the use of dynamic priority scheduling, DS-EDF outperforms both DS-FP and ML in terms of CPU utilization and impact on lower-priority application transactions.
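
The EDF policy that DS-EDF applies to update jobs can be illustrated with a toy unit-time scheduler (an illustrative sketch only; it omits the deferred release-time computation that is the heart of deferrable scheduling):

```python
def edf_timeline(jobs, horizon):
    """jobs: {name: [release, deadline, remaining_wcet]}.
    Returns the job run in each unit slot under earliest-deadline-first."""
    work = {n: list(v) for n, v in jobs.items()}  # mutable copies
    slots = []
    for t in range(horizon):
        ready = [n for n, (r, d, c) in work.items() if r <= t and c > 0]
        if not ready:
            slots.append(None)  # idle slot
            continue
        n = min(ready, key=lambda name: work[name][1])  # earliest deadline
        work[n][2] -= 1
        slots.append(n)
    return slots
```

At each slot the released job with the earliest deadline runs, so a later-arriving but tighter-deadline update preempts an earlier one.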

DOWNLOAD TR-2055.pdf


Andrew Lenharth(1), Donald Nguyen(2), Keshav Pingali(2). "Priority Queues Are Not Good Concurrent Priority Schedulers". The University of Texas at Austin, Institute for Computational Engineering and Sciences(1), The University of Texas at Austin, Department of Computer Sciences(2). Report# TR-11-39 (regular tech report). November 18th, 2011. 10 pages.

The problem of _priority scheduling_ arises in many algorithms. There is a dynamic pool of tasks that can be executed in any order, but some execution orders are more efficient than others. To exploit such schedules, each task is associated with an application-specific priority, and a system must schedule tasks to minimize priority inversions and scheduling overheads. Parallel implementations of priority scheduling tend to use concurrent priority queues. We argue that this obvious solution is not a good one. Instead, we propose an alternative implementation that exploits a fact not exploited by priority queues: algorithms amenable to priority scheduling can tolerate some amount of priority inversion. To evaluate this scheduler, we used it to implement six complex irregular benchmarks. Our results show that our end-to-end performance is 8--40 times better than the performance obtained by using state-of-the-art concurrent priority queues in the Intel TBB library.
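
One simple way to exploit tolerance of priority inversion (a sequential sketch in the spirit of, but not identical to, the scheduler in the report) is to coarsen priorities into buckets: within a bucket order is only FIFO, bounding inversion by the bucket width, and a concurrent version can shard each bucket to cut contention on a single global heap:

```python
from collections import defaultdict, deque

class BucketedScheduler:
    """Approximate priority scheduler: priorities are hashed into
    coarse buckets of width `delta`; within a bucket, FIFO order."""
    def __init__(self, delta):
        self.delta = delta
        self.buckets = defaultdict(deque)

    def push(self, prio, task):
        self.buckets[prio // self.delta].append(task)

    def pop(self):
        # Serve the lowest-numbered (highest-priority) non-empty bucket.
        k = min(self.buckets)
        task = self.buckets[k].popleft()
        if not self.buckets[k]:
            del self.buckets[k]
        return task
```

Tasks within `delta` of each other may run out of order, which the target algorithms tolerate; in exchange, most pushes and pops touch only one small bucket rather than a global ordered structure.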

DOWNLOAD TR-2056.pdf


Juhyun Lee(1). "Robust Color-based Vision for Mobile Robots". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-40 (dissertation). November 30th, 2011. 153 pages.

An intelligent agent operating in the real world needs to be fully aware of the surrounding environment to make the best decision possible at any given point in time. There are many forms of input devices for a robot that gather real-time information about the surroundings, such as video cameras, laser/sonar range finders, and GPS, to name a few. In this thesis, a vision system for a mobile robot navigating through different illumination conditions is investigated. Many state-of-the-art object recognition algorithms employ methods running on grayscale images, because using color is difficult for several reasons: (a) the object-of-interest's true colors may not be recorded by the camera hardware due to illumination artifacts, and (b) colors are often too ambiguous to be a robust visual descriptor of an object. In this dissertation, we address these two challenges and present new color-based vision algorithms for mobile robots that are robust and efficient. The first part of this dissertation focuses on the problem of color constancy for mobile robots under different lighting conditions. Specifically, we use a generate-and-test methodology to evaluate which simulated global illumination condition leads to the generated view that most closely matches what the robot actually sees. We assume the diagonal color model when generating views of the object of interest under previously unseen conditions. In the second part of the dissertation, we present a vision framework for mobile robots that enables observation of illumination artifacts in a scene and reasoning about the lighting conditions to achieve robust color-based object tracking. Before implementing this framework, we first devise a novel vision-based localization correction algorithm with graphics hardware support, and present how to find possibly shaded regions in the recorded scene by using techniques from 3D computer graphics.
We then demonstrate how to integrate a color-based object tracker from the first part of this dissertation with our vision framework. Even with the contributions from the first two parts of the dissertation, there remains some degree of uncertainty in the robot's assessment of an object's true color. The final part of this dissertation introduces a novel algorithm to overcome this uncertainty in color-based object tracking. We show how an agent can achieve robust color-based object tracking by combining multiple different visual characteristics of an object for more accurate robot vision in the real world.
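
The generate-and-test idea under the diagonal color model can be sketched as follows (a toy with hypothetical helper names; in the thesis the candidates are full rendered views of the object, not bare gain triples):

```python
def diagonal_transform(rgb, gains):
    """Diagonal color model: an illumination change scales each
    RGB channel independently."""
    return tuple(p * g for p, g in zip(rgb, gains))

def best_illuminant(reference, observed, candidates):
    """Generate-and-test: return the candidate gain triple whose
    rendering of the reference colors most closely matches what the
    camera observed (sum of squared channel errors)."""
    def err(gains):
        return sum((t - o) ** 2
                   for ref, obs in zip(reference, observed)
                   for t, o in zip(diagonal_transform(ref, gains), obs))
    return min(candidates, key=err)
```

Each candidate illumination is "generated" by rendering the known reference colors under it, and "tested" against the actual observation; the best match is taken as the current lighting condition.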

DOWNLOAD TR-2062.pdf


Shivaram Kalyanakrishnan(1). "Learning Methods for Sequential Decision Making with Imperfect Representations". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-41 (dissertation). December 1st, 2011. 346 pages.

Sequential decision making from experience, or reinforcement learning (RL), is a paradigm that is well-suited for agents seeking to optimize long-term gain as they carry out sensing, decision, and action in an unknown environment. RL tasks are commonly formulated as Markov Decision Problems (MDPs). Learning in finite MDPs enjoys several desirable properties, such as convergence, sample-efficiency, and the ability to realize optimal behavior. Key to achieving these properties is access to a perfect representation, under which the state and action sets of the MDP can be enumerated. Unfortunately, RL tasks encountered in the real world commonly suffer from state aliasing, and nearly always they demand generalization. As a consequence, learning in practice invariably amounts to learning with imperfect representations. In this dissertation, we examine the effect of imperfect representations on different classes of learning methods, and introduce techniques to improve their practical performance. We make four main contributions. First we introduce ``parameterized learning problems'', a novel experimental methodology facilitating the systematic control of representational aspects such as state aliasing and generalization. Applying this methodology, we compare the class of on-line value function-based (VF) methods with the class of policy search (PS) methods. Results indicate clear patterns in the effects of representation on these classes of methods. Our second contribution is a deeper analysis of the limits imposed by representations on VF methods; specifically we provide a plausible explanation for the relatively poor performance of these methods on Tetris, the popular video game. The third major contribution of this dissertation is a formal study of the ``subset selection'' problem in multi-armed bandits. 
This problem, which directly affects the sample-efficiency of several commonly-used PS methods, also finds application in areas as diverse as industrial engineering and on-line advertising. We present new algorithms for subset selection and bound their performance under different evaluation criteria. Under a PAC setting, our sample complexity bounds indeed improve upon existing ones. As its fourth contribution, this dissertation introduces two hybrid learning architectures for combining the strengths of VF and PS methods. Under one architecture, these methods are applied in sequence; under the other, they are applied to separate components of a compound task. We demonstrate the effectiveness of these methods on a complex simulation of robot soccer. In sum, this dissertation makes philosophical, analytical, and methodological contributions towards the development of robust and automated learning methods for sequential decision making with imperfect representations.
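
As a flavor of subset selection in bandits (a generic successive-halving sketch, not one of the dissertation's algorithms), one can repeatedly sample every surviving arm equally and discard the worse half until m arms remain:

```python
def halving_select(samplers, m, per_round):
    """Select m of n arms by successive halving. `samplers` is a list
    of zero-argument functions, one per arm, returning a reward."""
    arms = {i: [] for i in range(len(samplers))}
    while len(arms) > m:
        for i in arms:  # pull each surviving arm equally
            arms[i].extend(samplers[i]() for _ in range(per_round))
        ranked = sorted(arms, key=lambda i: sum(arms[i]) / len(arms[i]),
                        reverse=True)
        keep = max(m, len(arms) // 2)  # never drop below m survivors
        arms = {i: arms[i] for i in ranked[:keep]}
    return set(arms)
```

Halving the candidate set each round concentrates the sampling budget on arms that are still plausibly among the top m, which is the intuition behind sample-efficient subset selection.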

DOWNLOAD TR-2063.pdf


Elben Shira(1), Matthew Lease(1). "Expert Search on Code Repositories". The University of Texas at Austin, Department of Computer Sciences(1). Report# TR-11-42 (regular tech report). December 19th, 2011. 8 pages.

We consider a new expert search task: given a code base, its history, and a block of code as a query, who are experts to consult if we have a question about this block of code? We discuss conditions under which such search would be useful and methodological implications for assessing expertise in this scenario. We then propose several ranking functions and describe a pilot evaluation on an active code repository.

DOWNLOAD TR-2066.pdf


Adam Dziuk(1). "Creating Intelligent Agents through Shaping of Coevolution". The University of Texas at Austin, Department of Computer Sciences(1). Report# HR-11-01 (honors theses). February 9th, 2011. 16 pages.

Producing agents that are effective in video game and video game-like environments is a difficult task for which powerful solutions are needed. One such solution, shaping, has been applied to standard evolutionary algorithms in the past in environments such as the NERO video game, and has had considerable success. Another solution, coevolution, has been shown to be a promising approach, but has been difficult to control. In this paper, the shaping approach is applied to coevolution in an attempt to enhance the already powerful effects of coevolution in competitive environments. Several example automated shaping methods are introduced and tested in a simple capture-the-flag-like environment using the OpenNERO research platform and rtNEAT for neuroevolution. This paper demonstrates that each of these shaping methods is superior to a control, and further analyzes the results in order to determine general rules for shaping coevolution. While different methods of shaping apply differently to different environments, using the approach detailed here, it should be possible to use coevolution to create intelligent agents in a variety of situations.

DOWNLOAD TR-2015.pdf


Matthew Christiaan de Wet(1). "Avoiding Premature Convergence in NeuroEvolution by Broadening the Evolutionary Search". The University of Texas at Austin, Department of Computer Sciences(1). Report# HR-11-02 (honors theses). March 9th, 2011. 63 pages.

This paper introduces a novel, general-purpose neuroevolution method named Broadened neuroEvolutionary Algorithm for Searching Topologies (BEAST). The goal of this algorithm is to overcome premature convergence of evolution in dynamic problems or environments. Premature convergence is one of the major causes of failure in evolutionary artificial neural networks (EANNs), since it slows or halts evolution when local optima or changes to the objective function are present. BEAST combines several different techniques to reduce the effects of convergence, each of which targets one or more of its major causes. The most radical of these techniques applies a meta-search to direct evolution from a higher level than previous approaches. This thesis tests the performance of BEAST against convergence in three major types of dynamic problems.

DOWNLOAD TR-2020.pdf


Andy Luong(1). "Reconstructing a Fragmented Face from an Attacked Secure Identification Protocol". The University of Texas at Austin, Department of Computer Sciences(1). Report# HR-11-03 (honors theses). May 9th, 2011. 55 pages.

Secure facial identification systems compare an input face to a protected list of subjects. If this list were to be made public, there would be a severe privacy/confidentiality breach. A common approach to protect these lists of faces is to store a representation (descriptor or vector) of the face that is not directly mappable to its original form. In this thesis, we consider a recently developed secure identification system, Secure Computation of Face Identification (SCiFI), that utilizes an index-based facial vector to discretely compress the representation of a face image. A facial descriptor of this system does not allow for a complete reverse mapping. However, we show that if a malicious user is able to obtain a facial descriptor, it is possible that he/she can reconstruct an identifiable human face. We develop a novel approach to initially assemble the information given by the SCiFI protocol to create a fragmented face. This image has large missing regions due to SCiFI's facial representation. Thus, we estimate the missing regions of the face using an iterative Principal Component Analysis (PCA) technique. This is done by first building a face subspace based on a public set of human faces. Then, given the assembled image from the SCiFI protocol, we iteratively project this image in and out of the subspace to obtain a complete human face. Traditionally, PCA reconstruction techniques have been used to estimate very small or specific occluded regions of a face image; these techniques have also been used in facial recognition such as through a k-nearest neighbor approach. However, in our new method, we use it to synthesize 60% to 80% of a human face for facial identification. We explore novel methods of comparing images from different subspaces using metric learning and other forms of facial descriptors. We test our reconstruction with face identification tasks given to a human and a computer. 
Our results show that our reconstructions are consistently more informative than what is extracted from the SCiFI facial descriptor alone. In addition, these tasks show that our reconstructions are identifiable by humans and computers. The success of our approach implies that a malicious attacker could expose the faces on a registered database.
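
The iterative PCA projection step can be sketched in a few lines (a single-component, plain-Python toy; the thesis uses a full face subspace over image pixels and additional comparison machinery):

```python
def pca_inpaint(x, mask, mean, comp, iters=30):
    """Iteratively estimate the unobserved entries of x (mask[i] is
    False) by projecting onto one principal component `comp` and
    copying the reconstruction back only into the missing entries."""
    est = [xi if m else mu for xi, m, mu in zip(x, mask, mean)]
    for _ in range(iters):
        # Project the current estimate into the subspace...
        coeff = sum(c * (e - mu) for c, e, mu in zip(comp, est, mean))
        recon = [mu + coeff * c for mu, c in zip(mean, comp)]
        # ...and keep observed pixels fixed, updating only missing ones.
        est = [xi if m else r for xi, m, r in zip(x, mask, recon)]
    return est
```

Repeating the project-and-restore cycle pulls the missing region toward the face subspace while the observed fragment anchors the solution, which is how large occluded areas can be synthesized.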

DOWNLOAD TR-2038.pdf


Michael Miller(1). "PoComON: A POlicy-COMpliant Overlay Network". The University of Texas at Austin, Department of Computer Sciences(1). Report# HR-11-04 (honors theses). October 31st, 2011. 22 pages.

On today’s commodity internet, senders have no control over the path that their packets take beyond the first hop. This aspect of today’s internet prevents many potential network policies from being implemented. Source routing has been proposed as a mechanism to solve this problem, in which the sender chooses the path the traffic will take. However, previous approaches have either not enforced policies specified by the sender, or have been prohibitively expensive in terms of computing power. icing-pvm addresses these concerns by efficiently implementing source-based routing in a manner which allows path preferences to be enforced. PoComON builds on top of icing-pvm, and provides a transitional overlay network to deploy icing-pvm on today’s internet while supporting legacy applications.

DOWNLOAD TR-2054.pdf
