The University of Texas at Austin

Masters Theses

TR-05-40

Hall, Brandon. "Slot Scheduling: General-Purpose Multiprocessor Scheduling for Heterogeneous Workloads". The University of Texas at Austin, Department of Computer Sciences. Report# TR-05-40 (masters thesis). September 6, 2005. 61 pages.

This thesis presents slot scheduling, an approach to general-purpose CPU scheduling for multiprocessor systems. The chief virtues of slot scheduling are versatility (the ability to support a broad range of scheduler classes) and intelligibility (allowing system designers to reason easily about interactions among different kinds of schedulers). In particular, slot scheduling is well suited to meeting the needs of both real-time and gang-scheduled applications. These characteristics distinguish slot scheduling from existing scheduler proposals, which tend to suffer from overspecialization and complexity. Slot scheduling achieves its goals by employing an explicit "bulletin board" representation of CPU allocation that decentralizes the task of scheduling from the kernel and permits scheduling logic to be carried out by applications themselves. We show that this approach is both feasible and efficient.
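The bulletin-board idea can be pictured as a shared grid of CPU-by-time slots that applications inspect and claim directly. The Python sketch below is illustrative only; the class name, slot granularity, and methods are assumptions, not the thesis's actual interface. It shows how gang-scheduled and periodic real-time reservations both fall out of the same explicit slot representation.

```python
class SlotBoard:
    """A hypothetical bulletin board: board[cpu][slot] holds the owning
    task id, or None if the slot is free."""

    def __init__(self, num_cpus, num_slots):
        self.board = [[None] * num_slots for _ in range(num_cpus)]

    def claim(self, task, cpu, slot):
        """Claim one slot; returns False if it is already taken."""
        if self.board[cpu][slot] is not None:
            return False
        self.board[cpu][slot] = task
        return True

    def claim_gang(self, task, slot):
        """Gang scheduling: claim the same slot on every CPU, all or nothing."""
        if any(row[slot] is not None for row in self.board):
            return False
        for row in self.board:
            row[slot] = task
        return True

    def claim_periodic(self, task, cpu, period, needed):
        """Real-time style: claim every period-th free slot on one CPU."""
        free = [s for s in range(0, len(self.board[cpu]), period)
                if self.board[cpu][s] is None]
        if len(free) < needed:
            return False
        for s in free[:needed]:
            self.board[cpu][s] = task
        return True
```

Because the board is explicit data rather than kernel policy, each application can run its own claiming logic over it, which is the decentralization the abstract describes.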

DOWNLOAD tr05-40.pdf

TR-01-51

Sastry, Nishanth R. "Application Specific Unicast Congestion Control". The University of Texas at Austin, Department of Computer Sciences. Report# TR-01-51 (masters). December 2001. 44 pages.

Some new Internet-based applications such as streaming media cannot function well with TCP's decrease-by-half response to congestion indications from the network. However, to prevent congestion collapse, it is imperative that all applications use some form of congestion control. Furthermore, for such a mechanism to be feasible, it must converge to fairness and efficiency, just as TCP does. This thesis presents a framework that allows an application to choose an aggressive or smooth congestion response from a wide family of window-based protocols called CYRF (for Choose Your Response Function) that are designed to converge to fairness. We also give a simple rule for smooth CYRF flows to be TCP-friendly. We first derive a sufficient condition that ensures convergence to fairness. We then present the surprising result that an application can satisfy this fairness condition and construct a congestion response tailored to its needs by choosing from almost any pair of monotonically non-decreasing functions. Constructing a window increase policy using a slowly increasing function results in an aggressive protocol. Similarly, a slowly increasing function in the decrease policy gives rise to smooth protocols. We characterize TCP-friendliness in steady state and show that any smooth CYRF protocol can be TCP-friendly if the product of its window increase and decrease functions is proportional to the window size. An interesting aspect of this work is that all commonly used window-based protocols such as TCP, GAIMD, and binomial congestion control are shown to be special cases of a single family of protocols, thus providing a powerful unified framework for analyzing them. We derive most of the important results about these protocols as special cases of the results for CYRF.
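The construction can be sketched in a few lines: a CYRF-style protocol is determined by a pair of non-decreasing functions, one governing window increase and one governing decrease. The Python below is a hypothetical illustration (the function names and per-ACK accounting are assumptions, not the thesis's definitions). It recovers a TCP-like response as the special case f(w) = 1, g(w) = w/2, whose product w/2 is proportional to the window size, matching the TCP-friendliness condition stated above.

```python
def make_cyrf(f, g, w_init=1.0, w_min=1.0):
    """Return (on_ack, on_congestion) handlers for a window-based
    protocol built from an increase function f and a decrease
    function g, both assumed monotonically non-decreasing."""
    state = {"w": w_init}

    def on_ack():
        # Per-ACK increase of f(w)/w, i.e. roughly f(w) per round trip.
        state["w"] += f(state["w"]) / state["w"]
        return state["w"]

    def on_congestion():
        # Decrease by g(w) on a congestion indication, floored at w_min.
        state["w"] = max(w_min, state["w"] - g(state["w"]))
        return state["w"]

    return on_ack, on_congestion

# TCP-like member of the family: additive increase, halving decrease.
# f(w) * g(w) = w/2 is proportional to w, so this member is TCP-friendly
# by the rule quoted in the abstract.
ack, loss = make_cyrf(lambda w: 1.0, lambda w: w / 2.0, w_init=10.0)
```

Swapping in a slowly increasing f yields an aggressive protocol, and a slowly increasing g yields a smooth one, as the abstract describes.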

DOWNLOAD tr01-51.pdf

TR-01-20

Chandra, Bharat Baddepudi. "Web Workloads Influencing Disconnected Service Access (Masters Thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# TR-01-20 (masters). May 2001. 82 pages.

Disconnected operation, in which a client accesses a service without relying on network connectivity, is crucial for improving availability, supporting mobility, and providing responsive performance. Because many web services are not cacheable, disconnected access to web services may require mobile service code to execute in client caches. Furthermore, as web workloads access large amounts of data, disconnected access requires prefetching data that will later be used on demand. Unfortunately, this can significantly increase the total amount of data fetched by a service. In this thesis we present an argument that aggressive prefetching is feasible. A study of the web workload characteristics at typical clients suggests a need for a flexible, automated resource management system to prevent denial-of-service attacks by the potentially unreliable mobile service code that prefetches at the clients. We therefore present and evaluate a popularity-based resource management policy for such an environment.

DOWNLOAD tr01-20.pdf

TR-01-18

Hanson, Heather. "Static Energy Reduction Techniques in Microprocessor Caches". The University of Texas at Austin, Department of Computer Sciences. Report# TR-01-18 (masters). May 2001. 52 pages.

Managing power and energy consumption has become a primary consideration for microprocessor design. This report examines the effect of technology scaling on static power and energy dissipation and evaluates three techniques to reduce static energy in primary and secondary microprocessor caches. We examine the energy and performance tradeoffs associated with each technique and present the leakage-reduction configurations that minimize the energy-delay product. Our experimental results show that in the best case, the energy-delay product is reduced by 2% in the level-1 instruction cache, 7% in the level-1 data cache, and a factor of 50 in the level-2 unified cache. This technical report is an updated edition of a Master's Report submitted in May 2001 by Heather Hanson to the Department of Electrical and Computer Engineering at The University of Texas at Austin.

DOWNLOAD tr01-18.pdf

AI98-267

Kumar, Shailesh. "Confidence based Dual Reinforcement Q-Routing: an On-line Adaptive Network Routing Algorithm". The University of Texas at Austin, Department of Computer Sciences. Report# AI98-267 (masters). May 1998. 95 pages.

Efficient routing of information packets in dynamically changing communication networks requires that the routing policy adapt as the load levels, traffic patterns, and topology of the network change. Making globally optimal routing decisions would require a central observer/controller with complete information about the state of all nodes and links in the network, which is not realistic. Therefore, routing decisions must be made locally by individual nodes (routers) using only local routing information. The routing information at a node could be estimates of packet delivery time to other nodes via its neighbors, or estimates of the queue lengths of other nodes in the network. An adaptive routing algorithm should efficiently explore and update the routing information available at different nodes as it routes packets, and should continuously evolve efficient routing policies with minimum overhead on network resources. In this thesis, an on-line adaptive network routing algorithm called Confidence-based Dual Reinforcement Q-Routing (CDRQ-Routing), based on the Q-learning framework, is proposed and evaluated. In this framework, the routing information at individual nodes is maintained as Q-value estimates of how long it will take to send a packet to any particular destination via each of the node's neighbors. These Q values are updated through exploration as packets are transmitted. The main contribution of this work is faster adaptation and improved quality of routing policies over Q-Routing. The improvement is based on two ideas. First, the quality of exploration is improved by including with each Q value a confidence measure representing how reliable that Q value is; the learning rate is a function of these confidence values. Second, the quantity of exploration is increased by including backward exploration in Q-learning. As a packet hops from one node to another, it not only updates a Q value in the sending node (forward exploration, as in Q-Routing), but also updates a Q value in the receiving node using information appended to the packet when it is sent out (backward exploration). Thus two Q-value updates occur per packet hop in CDRQ-Routing, as against only one in Q-Routing. Certain properties of forward and backward exploration that form the basis of these update rules are stated and proved in this work. Experiments over several network topologies, including a 36-node irregular grid and a 128-node 7-D hypercube, indicate that the improvement in quality and the increase in quantity of exploration contribute in complementary ways to the performance of the overall routing algorithm. CDRQ-Routing was able to learn optimal shortest-path routing at low loads and efficient routing policies at medium loads almost twice as fast as Q-Routing. At high load levels, the routing policy learned by CDRQ-Routing was twice as good as that learned by Q-Routing in terms of average packet delivery time. CDRQ-Routing was found to adapt significantly faster than Q-Routing to changes in network traffic patterns and network topology. The final routing policies learned by CDRQ-Routing were able to sustain much higher load levels than those learned by Q-Routing. Analysis shows that the exploration overhead incurred in CDRQ-Routing is less than 0.5% of the packet traffic. Various extensions of CDRQ-Routing, namely routing in heterogeneous networks (different link delays and router processing speeds), routing with adaptive congestion control (in the case of finite queue buffers), and the inclusion of predictive features, have been proposed as future work. CDRQ-Routing is substantially superior to, and more realistic than, state-of-the-art distance vector routing and the Q-Routing algorithm.
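The two-updates-per-hop mechanism can be sketched as follows. This Python fragment is a hypothetical illustration: the table defaults, the confidence-dependent learning rate, and the hop interface are all assumptions made for the sketch, not the thesis's actual update rules.

```python
class Node:
    def __init__(self, name, neighbors):
        self.name = name
        self.neighbors = neighbors
        self.Q = {}   # (neighbor, dest) -> estimated delivery time
        self.C = {}   # (neighbor, dest) -> confidence in that estimate

    def best_estimate(self, dest):
        """Minimum estimated delivery time to dest over all neighbors."""
        if dest == self.name:
            return 0.0
        return min(self.Q.get((n, dest), 10.0) for n in self.neighbors)

    def update(self, via, dest, target):
        key = (via, dest)
        q = self.Q.get(key, 10.0)
        c = self.C.get(key, 0.0)
        lr = 1.0 - 0.9 * c                 # low confidence -> fast learning
        self.Q[key] = q + lr * (target - q)
        self.C[key] = min(1.0, c + 0.2)    # confidence rises after an update

def hop(sender, receiver, source, dest, link_delay, queue_delay):
    # Forward exploration (as in plain Q-Routing): the sender refines its
    # estimate of reaching dest via the receiver.
    fwd = queue_delay + link_delay + receiver.best_estimate(dest)
    sender.update(receiver.name, dest, fwd)
    # Backward exploration (the dual-reinforcement addition): the packet
    # carries the sender's estimate back toward the source, so the
    # receiver updates too -- two Q-value updates per hop instead of one.
    bwd = link_delay + sender.best_estimate(source)
    receiver.update(sender.name, source, bwd)
```

In this sketch the learning rate shrinks as confidence grows, so stale estimates are overwritten quickly while well-confirmed ones change slowly, which is the qualitative behavior the abstract attributes to the confidence measure.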

DOWNLOAD UT-AI-TR-98-267.pdf

AI98-268

Prior, John William. "Eugenic Evolution for Combinatorial Optimization". The University of Texas at Austin, Department of Computer Sciences. Report# AI98-268 (masters). May 1998. 126 pages.

In the past several years, evolutionary algorithms such as simulated annealing and the genetic algorithm have received increasing recognition for their ability to optimize arbitrary functions. These algorithms rely on the process of Darwinian evolution, which promotes highly successful solutions that result from random variation. This variation is produced by the random operators of mutation and/or recombination. These operators make no attempt to determine which alleles or combinations of alleles are most likely to yield overall fitness improvement. This thesis explores the benefits that can be gained by utilizing a direct analysis of the correlations between fitness and alleles or allele combinations to intelligently and purposefully design new highly fit solutions. An algorithm is developed that explicitly analyzes allele-fitness distributions and then uses the information gained from this analysis to purposefully construct new individuals "bit by bit". Explicit measurements of "gene significance" (the effect of a particular gene upon fitness) allow the algorithm to adaptively decide when conditional allele-fitness distributions are necessary in order to correctly track important allele interactions. A new operator, the "restriction" operator, allows the algorithm to simply and quickly compute allele selection probabilities using these conditional fitness distributions. The resulting feedback from the evaluation of new individuals is used to update the statistics and thereby guide the creation of increasingly better individuals. Since explicit analysis and creation guide this evolutionary process, it is not a form of Darwinian evolution. It is a proactive, contrived process that attempts to intelligently create better individuals through detailed analysis of historical data. It is therefore a eugenic evolutionary process, and thus this algorithm is called the "Eugenic Algorithm" (EuA).
The EuA was tested on a number of benchmark problems (some of which are NP-complete) and compared to widely recognized evolutionary optimization techniques such as simulated annealing and genetic algorithms. The results of these tests are very promising: the EuA optimized all the problems at a very high level of performance, and did so much more consistently than the other algorithms. In addition, the operation of the EuA was very helpful in illustrating the structure of the test problems. The development of the EuA is a significant step toward statistically justified combinatorial optimization, paving the way for optimization algorithms that make more intelligent use of the information available to them. This new evolutionary paradigm, eugenic evolution, will lead to faster and more accurate combinatorial optimization and to a greater understanding of the structure of combinatorial optimization problems.
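Under simplifying assumptions (independent genes, no restriction operator or significance test), the core loop the abstract describes resembles the following sketch: estimate per-gene allele-fitness statistics from the better half of the population, then construct new individuals bit by bit from those statistics. All names here are illustrative, not the thesis's.

```python
import random

def eugenic_step(population, fitness, n_new, rng=None):
    """population: list of 0/1 lists; fitness: callable on an individual.
    Returns n_new individuals constructed bit by bit from the
    allele-fitness statistics of the elite half of the population."""
    rng = rng or random.Random(0)
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: max(1, len(ranked) // 2)]
    length = len(elite[0])
    # Per-gene probability that allele 1 appears among high-fitness parents.
    p_one = [sum(ind[i] for ind in elite) / len(elite) for i in range(length)]
    # Explicit construction, rather than random mutation/recombination.
    return [[1 if rng.random() < p_one[i] else 0 for i in range(length)]
            for _ in range(n_new)]

# Example: maximize the number of 1-bits (OneMax).
seed_rng = random.Random(1)
pop = [[seed_rng.randint(0, 1) for _ in range(8)] for _ in range(20)]
new_pop = eugenic_step(pop, fitness=sum, n_new=20)
```

Feeding the evaluated offspring back in as the next population and re-estimating the statistics gives the iterative refinement the abstract describes; the conditional distributions and the restriction operator would extend this independent-gene sketch to tracked allele interactions.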

DOWNLOAD UT-AI-TR-98-268.pdf

AI97-259

James A. Bednar. "Tilt Aftereffects in a Self-Organizing Model of the Primary Visual Cortex". The University of Texas at Austin, Department of Computer Sciences. Report# AI97-259 (masters). January 1997. 104 pages.

The psychological phenomenon known as the tilt aftereffect was used to demonstrate the functional properties of RF-LISSOM, a self-organizing model of laterally connected orientation maps in the primary visual cortex. The same self-organizing processes that are responsible for the development of the map and its lateral connections are shown to result in tilt aftereffects as well. The model allows analysis of data that are difficult to measure in humans, thus providing a view of the cortex that is otherwise not available. The results give computational support for the idea that tilt aftereffects arise from lateral interactions between adapting feature detectors, as has long been surmised. They also suggest that indirect tilt aftereffects could result from the conservation of synaptic resources. The model thus provides a unified computational explanation of self-organization and both direct and indirect tilt aftereffects in the primary visual cortex.

DOWNLOAD UT-AI-TR-97-259.tar

TR-96-31

McGuire, Tommy Marcus. "Implementing Abstract Protocols in C". The University of Texas at Austin, Department of Computer Sciences. Report# TR-96-31 (masters). February 28, 1997. 75 pages.

The goal of this work is the creation of a system to aid the implementation of a network communication protocol specified in the Abstract Protocol notation. The requirements of this system are primarily faithfulness to the formal notation and at least a semblance of efficiency. To this end, a suite of functions has been written to allow the creation of the processes involved in the protocol, originally specified in the Abstract Protocol notation, in the programming language C. The basic idea of the suite is the registration of functions whose execution affects the protocol state and which are enabled or disabled by the protocol state. The core of the suite is the function engine(), which invokes the functions in response to state changes. Several example protocols and their implementations will be presented, including an evolution of a simple request/reply protocol into a more complex form. Also, a reference has been provided for the functions in the suite.

DOWNLOAD tr96-31.pdf

AI95-236

Choe, Yoonsuck. "Laterally Interconnected Self-Organizing Feature Map in Handwritten Digit Recognition". The University of Texas at Austin, Department of Computer Sciences. Report# AI95-236 (masters). August 1995. 54 pages.

An application of biologically motivated laterally interconnected synergetically self-organizing maps (LISSOM) to off-line recognition of handwritten digits is presented. The lateral connections of the LISSOM map learn the correlation between units through Hebbian learning. As a result, the excitatory connections focus the activity in local patches and the inhibitory connections decorrelate redundant activity on the map. This process forms internal representations for the input that are easier to recognize than the input bitmaps themselves or the activation patterns on a standard Self-Organizing Map (SOM). The recognition rate on a publicly available subset of NIST Special Database 3 with LISSOM is 4.0% higher than that based on SOM, and 15.8% higher than that based on raw input bitmaps. These results form a promising starting point for building pattern recognition systems with a LISSOM map as a front end.

DOWNLOAD UT-AI-TR-95-236.tar.gz

AI95-238

Blackmore, Justine. "Visualizing High-Dimensional Structure with the Incremental Grid Growing Neural Network". The University of Texas at Austin, Department of Computer Sciences. Report# AI95-238 (masters). August 1995. 53 pages.

Understanding high-dimensional real-world data usually requires learning the structure of the data space. The structure may contain high-dimensional clusters that have important topological relationships. Methods such as merge clustering and self-organizing maps are designed to aid the visualization of such data. However, these methods often fail to capture critical structural properties of the input. Although self-organizing maps capture high-dimensional topology, they do not represent cluster boundaries or discontinuities. Merge clustering extracts clusters, but it does not capture local or global topology. This thesis presents an algorithm that combines the topology-preserving characteristics of self-organizing maps with a flexible, adaptive structure that learns cluster boundaries in the data. It also proposes a method for analyzing the quality of such visualizations, and outlines how it could be used for automatic parameter tuning.

DOWNLOAD UT-AI-TR-95-238.pdf

TR-95-20

Wagle, Yogesh Sharad. "Process Migration in Network Parallel Software Systems for Clusters of Workstations". The University of Texas at Austin, Department of Computer Sciences. Report# TR-95-20 (masters). June 1995. 98 pages.

Workstations integrated by local area networks have become the pervasive mode of computation services for engineers and scientists. Most workstations are currently dedicated to the use of one or a small number of workers and are used for only a small fraction of their potential service load. There are now a substantial number of software systems which aim to make more of this potential computation capability available for use. These fall into two general classes: those which route normal computational tasks to the idle workstations on a network, and those which integrate the network of workstations into a parallel execution environment. The former are typically user-friendly. That is, they respect the ownership rights of the users of the individual workstations and attempt to use only the otherwise unused fraction of the computation capability of the network of workstations. Condor [Bri86] from the University of Wisconsin is one of the most popular of these systems. The second class of software systems are those which create parallel execution environments from the resources of the network. These have not, in the past, been user-friendly: they have preempted the workstations of the network for the duration of the execution of the parallel computation. This disruptive approach has tended to limit the use of these potentially very beneficial systems. This thesis designs, implements, and evaluates an extension of the most popular of the parallel execution environments, PVM [Sun89], to be user-friendly. That is, PVM is extended by the work presented in this thesis to use in the parallel execution environment it creates only those workstations of the network which are not being used by their normally assigned set of users. Similar efforts are being pursued in other research, but there is as yet no such system available.
The approach taken in this thesis is to integrate into PVM the functionality from Condor which implements process migration, and to add a simple load-balancing strategy [Kyr88]. This approach is a model for creating new system capabilities by synthesis of the functionality of existing systems. This thesis focuses on evaluating the effectiveness of the extension to include process migration and load balancing. The evaluation is accomplished by defining systematic experiments whereby shifting patterns of user activity are superimposed upon an otherwise idle network and the effect on the time to completion of parallel benchmark programs with known execution behavior is observed. The primary conclusions of the research are: (i) the synthetic approach to implementing a significant new capability was quite practical, and (ii) the impact of user-friendliness on the performance of parallel programs on networks of workstations is quite tolerable, and the approach should be generally adopted.

DOWNLOAD tr95-20.pdf

TR-95-08

Kumar, Avdhesh. "Design Parameters for Multiprocessor Architectures Through Architectural Modeling". The University of Texas at Austin, Department of Computer Sciences. Report# TR-95-08 (masters). March 1995. 85 pages.

This thesis is concerned with the development of a general simulation model of multiprocessor systems which can be used to establish architectural design parameters for implementation of multiprocessor systems in a given technology. Specifically, the architectural design parameters include the number of processors which should be included in a cluster, the number of clusters, and the bandwidth and latency of communication elements. The model has three levels: a technology level, a structure level, and a workload level. The level of abstraction has been raised from the instruction-trace-driven system to a higher level, emphasizing communication and synchronization, with the actual execution of the sequential units modelled by an average behavior. The model allows for specification of different numbers of processors with different speeds, different types of communication devices (from buses to cross-point switches), and numerous other architectural variations. The model is validated against two multiprocessor shared-memory systems executing a known benchmark program and has been used to estimate performance on variations of these system configurations.

DOWNLOAD tr95-08.pdf

TR-95-07

Khedekar, Seema. "Component Library Interface for CODE 2.0 (XCodelib)". The University of Texas at Austin, Department of Computer Sciences. Report# TR-95-07 (masters). March 1995. 76 pages.

Compositional programming (assembly of a program from components which have already been coded and tested) has three enabling technologies: libraries of components, a means for specification of programs in terms of components and a means of identifying and accessing components. Component libraries ranging from graphical interfaces to computational fluid dynamics are becoming available both from commercial sources and as resources on the Internet. The CODE 2.0 programming environment implements composition of parallel programs from components which are sequential units of computation. The remaining requirement is support for identification and accessing of components which are appropriate for a given application. This thesis reports on the design, development and initial use of XCodelib which is a component library interface added to the CODE 2.0 parallel programming environment. XCodelib addresses the requirement for identification and accessing of components which are accessible through the Internet. CODE 2.0 specifies parallel programs as generalized dataflow graphs where the nodes of the graph are units of computation. The Component Library Interface (XCodelib) supports identification of components and the embedding of those components into CODE 2.0 programs. The location of the libraries and the interface to the libraries is uniform across all libraries whether they are local or remote. Identification and accessing of components is not a trivial task. Each of the libraries on the Internet has a different interface and a different mode of search. The overhead in time and effort to use standard general purpose Internet search mechanisms to locate and access components is excessive in the context of program development. XCodelib is composed of three parts: the Client Interface, the Component Database and the Internet Interface. The Component Database houses the local library components and caches components from remote libraries. 
The Client Interface accepts the search queries formulated by the user and translates them into queries against the Component Database. The Internet Interface transparently searches for components across the Internet. The search expressions for library components in the XCodelib interface are as general as possible and have few arbitrary restrictions. Feedback is provided during search query processing to guide the user in formulating correct queries, and only relevant information is displayed. Multiple modes exist for displaying the library components, and the results returned by a search are displayed by the Client Interface. A relevance feedback mechanism aids in the formulation of queries. XCodelib operates in two internal modes: a Library mode, which allows browsing of the database, and a Search mode, in which local and remote library components are found using keyword searches.

DOWNLOAD tr95-07.pdf

TR-92-13

Barnett, Michael. "A Systolizing Compiler". University of Texas at Austin, Department of Computer Sciences. Report# TR-92-13 (masters). March 1992. 80 pages.

We present an automatic scheme to generate programs for distributed-memory multiprocessors. We begin with a source program that contains no references to concurrency or communication. The source program corresponds to a systolic array: as such, it is a nested loop program with regular data dependences. The loop bounds may be any linear function of enclosing indices and of variables representing the problem size. The target programs are in an abstract syntax that can be translated to any distributed programming language with asynchronous parallelism and rendezvous communication; so far, a translator to occam 2 has been completed. The scheme is based on a formal theory of linear transformations and has been formally verified. The complexity of the scheme is independent of the problem size. It has been implemented. In contrast to other compilation methods, the scheme derives every aspect of the distributed program, including i/o and communication directives. It represents the first complete software realization of general systolic arrays.

DOWNLOAD tr92-13.pdf

AI89-105

Reed-Lade, Margaret. "Grammar Acquisition and Parsing of Semi-Structured Text (Master's Thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI89-105 (masters). May 1989. 162 pages.

This thesis describes software that acquires a grammar for semi-structured text and parses the text into a database. Semi-structured text can be generally characterized as text which is laid out on a page in a structured but not necessarily rigid fashion. The thesis describes related work, presents a user session, provides details of the grammar acquisition and parsing processes, and suggests directions for future work. The software described in the thesis provides a means by which ASCII files of semi-structured text could be automatically entered into a dynamically defined database structure.

DOWNLOAD NOT AVAILABLE

TR-89-38

Juszczak, James Walter. "Efficient Portable Parallel Matrix Computations". University of Texas at Austin, Department of Computer Sciences. Report# TR-89-38 (masters). December 1989. 87 pages.

In this thesis we exercise a method of developing parallel algorithms for matrix computations that facilitates efficient and portable implementations. The method includes defining a set of communication primitives, selecting a storage scheme, embedding a logical communications topology in the physical architecture, and synchronizing data flow and computations to reduce the overhead of communications. Several algorithms are implemented using the column wrapped storage scheme and communication primitives which are independent of the underlying parallel architecture. Theoretical and experimental results are presented.
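The column-wrapped storage scheme mentioned above is commonly understood as interleaving matrix columns across processors, so that each processor holds an evenly spread strip of columns and the load stays balanced as columns are eliminated. The short sketch below reflects that usual interpretation, not necessarily the thesis's exact conventions.

```python
def owner(col, p):
    """Processor that stores a given column under column wrapping:
    column j of the matrix lives on processor j mod p."""
    return col % p

def local_columns(rank, n_cols, p):
    """Columns held locally by one processor, as an interleaved strip."""
    return list(range(rank, n_cols, p))
```

Because ownership is a pure function of the column index, a node can locate any column's data without communication, which is what makes the scheme convenient to pair with architecture-independent communication primitives.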

DOWNLOAD tr89-38.pdf

AI88-79

Spangler, W Scott. "Exploratory Robot Learning of Sensory-Motor Integration (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI88-79 (masters). May 1988. 68 pages.

This thesis describes an algorithm for mobile robot learning of sensory-motor coordination. The goal is to learn how to characterize the effect that taking a particular movement action will have on a robot's sensory input, given the initial sensory situation. The concepts are learned from the sensory results of taking random actions within a training environment (learning from examples), but these concepts are general enough to apply to unseen environments. The learning algorithm is implemented and tested for a particular type of simulated sonar-sensing robot.

DOWNLOAD UT-AI-TR-88-79.pdf

TR-88-32

Sivaramakrishnan, S. "Evaluation of Logical External Memory Architectures for Multiprocessor Systems". University of Texas at Austin, Department of Computer Sciences. Report# TR-88-32 (masters). September 1988. 78 pages.

Parallel structuring of external memory operations has been a relatively unexplored area in parallel processing. This thesis describes an effective parallel formulation of two I/O-intensive seismic data processing algorithms on a large shared-memory multiprocessor system. Simulation modeling has been used as the vehicle for performance evaluation. Results suggest that applications which are I/O-bound on most uniprocessors would probably become more compute-bound in the memory-rich environments that are characteristic of large multiprocessor systems.

DOWNLOAD tr88-32.pdf

TR-88-31

Somalwar, Kiran Kumar. "Data Transfer Scheduling". University of Texas at Austin, Department of Computer Sciences. Report# TR-88-31 (masters). September 1988. 65 pages.

The problem of data transfer scheduling is a problem of multiple resource allocation. It is cast as an edge coloring problem. Several exact edge-coloring algorithms from the literature are described. A new algorithm for edge coloring in which no color can be used more than k times is designed and analyzed. Several inexact algorithms are described and evaluated for their cost effectiveness.
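The edge-coloring formulation can be sketched concretely: each transfer is an edge between a source and a sink, a color is a communication round, no endpoint may appear twice in one round, and the extra k-use constraint bounds the number of simultaneous transfers per round. The greedy pass below is illustrative only; the thesis's exact and inexact algorithms are not shown here.

```python
def color_transfers(edges, k):
    """edges: list of (src, dst) pairs. Returns a list of rounds (colors),
    each a list of edges, such that no endpoint repeats within a round
    (no device does two transfers at once) and no round carries more
    than k transfers (the at-most-k-uses-per-color constraint)."""
    rounds = []
    for u, v in edges:
        for r in rounds:
            busy = {x for e in r for x in e}   # endpoints active this round
            if len(r) < k and u not in busy and v not in busy:
                r.append((u, v))
                break
        else:
            rounds.append([(u, v)])            # open a new color
    return rounds
```

Greedy first-fit like this is one of the simple inexact strategies one might evaluate for cost effectiveness; it can use more colors than an exact algorithm, in exchange for linear-time simplicity.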

DOWNLOAD tr88-31.pdf

TR-88-22

Sriram, Ramachandran. "A Facility for the Display and Manipulation of Dependency Graphs". University of Texas at Austin, Department of Computer Sciences. Report# TR-88-22 (masters). June 1988. 55 pages.

A general facility is needed for the display and manipulation of the dependency graphs in these research domains. Such a tool must be extendable to provide all the functionality required in a specific area while preserving generality. This thesis describes the development of the Interactive Dependency Graph Analyzer (IDeA), a graphical tool that meets these requirements.

DOWNLOAD tr88-22.pdf

TR-88-13

Easwar, Sivagnanam Ramasundaram. "Relational Database Structure for Storage and Manipulation of Dependency Graphs". University of Texas at Austin, Department of Computer Sciences. Report# TR-88-13 (masters). April 1988. 100 pages.

This thesis provides a first step toward resolving the problem of converting sequential Fortran programs to parallel form, by capturing the potential parallel computation structure of a Fortran program in a relational database. Parallel languages are required to fully utilize the parallel machines that have been developed, yet many man-years of sequential FORTRAN programs have already been written, and rewriting these programs in some parallel language would be almost impossible. The database produced by this thesis can be used by other programs to generate specific parallel computation structures appropriate for given environments.

DOWNLOAD tr88-13.pdf

AI87-47

Bhattacharjee, A. "Interpreting Narratives with Evaluative Statements (Master's Thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI87-47 (masters). January 1987. 51 pages.

Discourse structure is one of the foremost areas of study in natural language. Discourse is concluded to be governed by structures that go beyond the sentence or the relation between sentences. These structures capture a global purpose, which is the motivation for discourse. A form of such a global structure is an evaluative statement in discourse. This thesis is a case study of narratives and their associated evaluative statements. It is a computational attempt at reflecting the process by which an evaluative statement is used in interpreting narratives.

DOWNLOAD UT-AI-TR-87-47.pdf

AI87-59

Mauladad, Yusuf N. "CALCLAB: An Interactive Environment for Scientific Numerical Problem Solving (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI87-59 (masters). August 1987. 144 pages.

CalcLab is an interactive environment for solving numerical problems in such areas as physics. Objects or situations are described by sets of variables. Physical principles and constraints are expressed as mathematical equations. Variables and equations can be connected to form models to solve numerical problems. A primary goal of this research has been to design and implement a visual environment that is easy and intuitive to use. The user interface requires no programming language expertise. Several features have been incorporated to aid the user by automatically performing simple tasks. Another goal has been to enable the user to build a library of solution models that can be reused or modified. Such models can also be combined with others to solve more complex problems. CalcLab is implemented in GLISP and runs on Xerox 1108 AI Workstations. It has been used to solve physics problems of the kind typically encountered by freshman physics students.

DOWNLOAD UT-AI-TR-87-59.pdf

AI87-60

Tsukada, Takanore. "Using Dominator-Modifier Relations to Disambiguate A Sentence (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI87-60 (masters). August 1987. 108 pages.

This work introduces a system to parse and disambiguate sentences efficiently. The system consists of three parts: a compiler, a parser, and a disambiguation program. Dominator-modifier relations, which are edges between two words, are produced to represent an ambiguous sentence. The relations are calculated using an efficient parsing method. They represent the ambiguities of a sentence clearly and facilitate ambiguity detection. These pairs and tree structures are produced by the parser and are used later in a disambiguation program. The parser uses a bottom-up parsing algorithm, the LR(0) algorithm with dynamic programming. A compiler that translates Definite Clause Grammar rules into subprograms of the parser is also described. A disambiguation program is described to show how dominator-modifier relations are used for disambiguation by heuristics. This experiment shows that dominator-modifier relations make it easier to solve lexical and structural ambiguities.
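
The thesis's parser and DCG compiler are not available here; purely as a generic illustration of the family the abstract places its algorithm in — bottom-up parsing with dynamic programming — here is a minimal CYK recognizer in Python over an invented toy grammar in Chomsky normal form (the grammar, lexicon, and function names are assumptions for the example, not the thesis's system):

```python
def cyk_recognize(words, lexicon, rules, start="S"):
    """Bottom-up dynamic-programming (CYK) recognizer.
    lexicon: word -> set of preterminals.
    rules: (B, C) -> set of parents A, for binary rules A -> B C."""
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):          # build longer spans from shorter
        for i in range(n - span + 1):
            j = i + span
            for m in range(i + 1, j):     # split point
                for B in table[i][m]:
                    for C in table[m][j]:
                        table[i][j] |= rules.get((B, C), set())
    return start in table[0][n]

# Hypothetical toy grammar: S -> NP VP, VP -> V NP
lexicon = {"she": {"NP"}, "saw": {"V"}, "him": {"NP"}}
rules = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
print(cyk_recognize(["she", "saw", "him"], lexicon, rules))  # True
```

Because the table records every nonterminal derivable over every span, an ambiguous sentence leaves multiple analyses visible in the chart — the same property that makes the thesis's chart-style parsing convenient for ambiguity detection.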

DOWNLOAD UT-AI-TR-87-60.pdf

AI87-63

Ross, Joseph C. "Developing Expert Systems from Examples and Explanations,". The University of Texas at Austin, Department of Computer Sciences. Report# AI87-63 (masters). August 1987. 93 pages.

During the past two decades many expert systems have been developed which perform classification very well in narrow domains. This thesis examines this work, summarizing some useful ideas which have emerged from it, but also identifying the following problems: (1) the construction of these systems has been a very cumbersome process, and (2) the resulting knowledge bases have not been broadly applicable. This thesis proposes that an important reason for these problems has been the failure to retain all knowledge available from an expert during knowledge acquisition. The primary tasks performed by most experts are identified as consisting of (1) classification and (2) explanation. Therefore, a representation is proposed in which classified examples and explanations of the classifications are retained explicitly. This approach greatly simplifies knowledge-base construction, since very little transformation must be performed by the knowledge engineer, and examples provide a natural way to systematically acquire and structure the knowledge. Furthermore, the two types of knowledge complement each other very well, and the representation is shown to be effective in supporting a variety of expert tasks. Finally, this thesis describes an actual implementation, in which these ideas were applied to the development of an expert system for the domain of clinical audiology.

DOWNLOAD UT-AI-TR-87-63.pdf

TR-87-33

Quinlivan III, William F. "A Simulator for Message-based Distributed Systems". University of Texas at Austin, Department of Computer Sciences. Report# TR-87-33 (masters). August 1987. 105 pages.

NO ABSTRACT

DOWNLOAD tr87-33.pdf

TR-87-32

Lakshmi, M. Seetha. "A Study and Analysis of Performance of Distributed Simulation". University of Texas at Austin, Department of Computer Sciences. Report# TR-87-32 (masters). August 1987. 58 pages.

NO ABSTRACT

DOWNLOAD tr87-32.pdf

TR-87-17

Whiting, Dana Mark. "Pads: a Graphical Interface for Software Systems Modeling". University of Texas at Austin, Department of Computer Sciences. Report# TR-87-17 (masters). May 1987. 112 pages.

NO ABSTRACT

DOWNLOAD tr87-17.pdf

TR-87-16

Allison, Eileen Archer. "Virtual Memory on a Reconfigurable Network Architecture". University of Texas at Austin, Department of Computer Sciences. Report# TR-87-16 (masters). May 1987. 62 pages.

NO ABSTRACT

DOWNLOAD tr87-16.pdf

TR-87-15

Anderson, James R. "Development of an Analog Neural Network Model of Computation". University of Texas at Austin, Department of Computer Sciences. Report# TR-87-15 (masters). May 1987. 42 pages.

Neural networks have been proposed as computational models for difficult problems such as content addressable memory, constraint satisfaction, and optimization. Two types of models have emerged: 1) networks of two-state linear threshold units which are analyzed using methods of statistical physics; and 2) networks of analog devices which are described by a system of coupled non-linear differential equations. A mean field relationship between these two models has been alluded to in the literature, but an explicit treatment of such has not appeared. This thesis establishes a computational correspondence between these two neural network models. A Markov process analysis of networks of two-state neurons leads to a set of non-linear differential equations similar to those describing analog neural networks. The derived equations are shown to yield the same steady-state solution as those of the analog neural network models. A portion of the thesis is dedicated to providing an explanation by analogy of the computation performed by neural networks. This analogy is useful in understanding the computational correspondence between the two neural network models. The development of the analog neural network model as provided in this thesis is important in that it provides a general mechanism for the analysis of networks of two-state neurons. This mechanism provides the basis of the computational correspondence between the two neural network models.
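
The thesis's derivation is not reproduced here; as an illustrative sketch of the second model type — coupled non-linear differential equations for analog units — the following integrates a small continuous Hopfield-style network with Euler steps. All weights, gains, and step sizes are invented for the example:

```python
import math

def simulate_analog_net(W, u0, steps=2000, dt=0.01, tau=1.0, gain=4.0):
    """Euler-integrate du_i/dt = (-u_i + sum_j W[i][j] * g(u_j)) / tau,
    with sigmoid activation g; returns the final activations g(u_i).
    A symmetric W drives the state toward a stable fixed point."""
    g = lambda u: 1.0 / (1.0 + math.exp(-gain * u))
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        v = [g(ui) for ui in u]
        u = [ui + dt * (-ui + sum(W[i][j] * v[j] for j in range(n))) / tau
             for i, ui in enumerate(u)]
    return [g(ui) for ui in u]

# Two self-exciting, mutually inhibiting units: a winner-take-all pair.
# The unit with the small initial advantage saturates near 1 while the
# other is suppressed near 0.
W = [[2.5, -2.0], [-2.0, 2.5]]
out = simulate_analog_net(W, [0.10, 0.05])
print([round(x, 3) for x in out])
```

The statistical-physics counterpart would replace each analog unit with a stochastic two-state neuron; the thesis's point is that a mean-field analysis of that stochastic network recovers equations of exactly this form, with the same steady states.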

DOWNLOAD tr87-15.pdf

AI86-21

Wong, Wing-Kwong. "GT: A Conjecture Generator for Graph Theory (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI86-21 (masters). January 1986. 84 pages.

The process of knowledge acquisition can be automated with programs that learn from observation and discovery. A better understanding of this learning strategy would facilitate the construction of expert systems and the exploration of scientific domains. GT, a frame-based program written for this thesis, learns relationships among classes of graphs by observing the examples and non-examples of these classes. The relationships considered are subclass, superclass, exclusion, and complement exclusion. GT's knowledge is partly represented as frames and partly as formulas of predicate calculus. The learned relationships are stored hierarchically. GT has found non-trivial, well-known theorems of graph theory.
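
GT itself is not available here; as a minimal sketch of the kind of conjecture it forms, the following Python fragment infers subclass and exclusion relations from observed example sets. The classes, example labels, and relation names are invented for the illustration:

```python
def infer_relations(classes):
    """Given class -> set of observed examples, conjecture pairwise
    relations in the style GT uses: 'subclass' when every observed
    example of A is also an example of B, 'exclusion' when no observed
    example is shared. These are conjectures, not theorems: they hold
    only of the examples seen so far."""
    conjectures = []
    names = sorted(classes)
    for a in names:
        for b in names:
            if a == b:
                continue
            if classes[a] <= classes[b]:          # subset of observations
                conjectures.append((a, "subclass-of", b))
            elif not (classes[a] & classes[b]):   # disjoint observations
                conjectures.append((a, "excludes", b))
    return conjectures

# Invented observations: every tree seen so far is connected and acyclic.
obs = {
    "tree":      {"g1", "g2"},
    "connected": {"g1", "g2", "g3"},
    "cyclic":    {"g3", "g4"},
}
for conjecture in infer_relations(obs):
    print(conjecture)
```

A non-example of a class can refute a conjecture later, which is why GT stores the learned relationships hierarchically and revises them as new graphs are observed.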

DOWNLOAD UT-AI-TR-86-21.pdf

AI86-30

Alcouffe, Phillipe M. "Metaphorical Shift and The Induction of Similarities (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI86-30 (masters). July 1986. 139 pages.

This thesis presents a view of metaphor as the syntactic realization of an underlying cognitive process: analogical reasoning. The semantic changes resulting from the 'anomalous' juxtaposition of the metaphor's referents can be represented in a conceptual semantic space by rules of construal. These rules describe the semantic changes induced by a comparison component part of analogical reasoning. The meaning of metaphor is then a set of valid transferred similarities. Some similarities are pre-encoded as part of our general knowledge and are `discovered' through the transfer process, whereas other similarities are `created'. A taxonomy of conventional metaphorical themes is presented that supports the creativity of metaphor by the means of similarities induced by metaphorical shifts and inherited through this taxonomy.

DOWNLOAD UT-AI-TR-86-30.pdf

AI86-31

Rath, Christopher A. "A Rule Language for the GLISP Programming System (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI86-31 (masters). August 1986. 130 pages.

GLISP is a programming language written by Dr. Gordon Novak at the University of Texas at Austin. It is an object oriented language based on LISP, a list processing language used in artificial intelligence programming. GLISP is often used in the university environment for the construction of expert systems programs, yet it has no formal rule language. This thesis presents the design and implementation of a general rule language incorporated into GLISP. The project includes an investigation into currently implemented rule languages, an implementation of the rule language compiler in InterLisp, and sample applications. The resulting system is expected to be efficient, general purpose, and easy to learn for those programmers already familiar with GLISP.

DOWNLOAD UT-AI-TR-86-31.pdf

AI86-33

Petrie, Charles J. "New Algorithms for Dependency-Directed Backtracking (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI86-33 (masters). September 1986. 51 pages.

The problem of providing dependency-directed backtracking in a Doyle-style Truth Maintenance System (TMS) is solved with three algorithms. The first modifies Doyle's basic algorithm by eliminating the necessity for conditional proof justifications. Unlike other attempts to do so, this method maintains the capability of the TMS to provide complete explanations from justifications. The second algorithm, which eliminates the generation of a maximal assumption set, provides a novel description of backtracking as a simple search. Finally, an extension of this algorithm allows contradiction resolution to be extended as an inference technique for expert systems. The dependency-directed backtracking algorithms presented are also the first to correctly ensure that an assertion is not justified unnecessarily and that unsatisfiable circularities are not created.

DOWNLOAD UT-AI-TR-86-33.pdf

AI86-38

Chee, Chin Yang. "Intuitionistic Modelling and Simulation of Mechanical Devices (Master's Thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI86-38 (masters). December 1986. 83 pages.

The objective of this thesis is to explore and develop representational schemes for both the textual and pictorial description of a mechanical device (e.g. an engine), in order to produce an explanation of the device's operation by generating an animation of the device while highlighting the part of the text being simulated. For this purpose we have built a prototype which includes a highly interactive and hierarchical 2-D graphics editor to construct/edit devices and a semantic parser which parses the text into events of the device's operation, which are then used as a "script" for animating the device.

DOWNLOAD UT-AI-TR-86-38.pdf

TR-86-22

McShea, Mary Kathleen. "Evaluation of Parallel Programming Languages". University of Texas at Austin, Department of Computer Sciences. Report# TR-86-22 (masters). September 1986. 128 pages.

NO ABSTRACT

DOWNLOAD tr86-22.pdf

AI85-12

Winghart, Olivier. "Computational Treatment of Metaphor in Text Understanding: A First Approach (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI85-12 (masters). August 1985. 83 pages.

This thesis starts with a general presentation of the phenomenon of metaphor, necessary to define our object of study. It also contains an ordered presentation of the current research approaches for dealing computationally with metaphors in text analysis. As an illustration, a case study system that analyses a mundane text (containing metaphors) is included, with a description of its functioning. Finally, more general issues are raised that require elucidation in order to enlarge the field of application of such systems.

DOWNLOAD NOT AVAILABLE

AI85-16

Hill, Rick. "Negotiated Interfaces for Software Reusability (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI85-16 (masters). December 1985. 142 pages.

This thesis presents an algorithm which constructs an interface between a user's structure and a generic program, allowing the GLISP compiler to specialize the generic program to work with the user's data structure. The interface between the user's data and the data structure expected by the generic program is "negotiated" by the user through menu selection.

DOWNLOAD NOT AVAILABLE

AI85-18

Wan, Man-Lee. "Menu-Based Creation of Procedures for Display of Data (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI85-18 (masters). December 1985. 51 pages.

We have investigated methods for creating interface procedures that extract the data needed by a library display procedure from user data of arbitrary structure and present it to the display procedure for generation of a display for presentation to the user. A procedure is created by the GLISP compiler from the following pieces of information: (1) the knowledge about how to produce the display, which is stored as a library procedure; (2) a description of the user's data in the GLISP structure description language; (3) a selection of the data from the user object that are to be displayed; and (4) a specification of the data needed by the display procedure. A system called LINK has been written that creates and runs interface procedures automatically using the information outlined above. The LINK system has been integrated with the GEV data inspector of the GLISP system.

DOWNLOAD NOT AVAILABLE

TR-85-27

O'Dell, Robert W. "On the Parallel Structuring of Resource Management". University of Texas at Austin, Department of Computer Sciences. Report# TR-85-27 (masters). November 1985. 72 pages.

NO ABSTRACT

DOWNLOAD tr85-27.pdf

AI84-02

Karimi, Ezat. "Computing Discourse Conceptual Coherence: A Means to Contextual Reference Resolution (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI84-02 (masters). August 1984. 92 pages.

This thesis discusses the problem central to the interpretation of the discourse of a text: contextual reference resolution. The problem itself devolves to problems about the nature of the inferencing mechanism, knowledge base organization and discourse representation. A framework for an inferencing mechanism based on a theory of discourse coherence and focusing is discussed. A framework is described for a knowledge base which is composed of the knowledge about entities (through frame structures) and the knowledge about the coherence relations between different event/states. A model for discourse representation based on a small set of intersentential (coherence) relations and the relations between the conceptual structures for the entities discussed in the text is also presented.

DOWNLOAD tr84-28.pdf

AI84-03

Yu, Yeong-Ho. "Translating Horn Clauses From English (Master's thesis)". The University of Texas at Austin, Department of Computer Sciences. Report# AI84-03 (masters). August 1984. 81 pages.

This work introduces a small grammar which translates a subset of English into Horn Clauses. The parallel between the syntactic structures of sentences and corresponding literals of their Procedural Logic representation facilitates the translation. An interpreter is also described which accepts English descriptions of problem solving methods and facts about an example domain, the Blocks World. The interpreter translates them into Horn Clauses by using the grammar, and interprets them as programs and data. It also carries out commands on data by using the programs, and answers queries about data. In addition, it provides a mechanism called "Consistency Rules" which maintains integrity of the database. This experiment shows that Procedural Logic provides a unified system to accomplish a wide variety of computations, and that a small grammar without any semantic or domain-specific knowledge can translate a small subset of English sentences into Horn Clauses. Further research on several other domains is suggested to evaluate the usefulness of this translation.
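
The thesis's grammar and interpreter are not reproduced here; as a minimal illustration of the target representation only, the following is a propositional Horn-clause interpreter (naive forward chaining) in Python over an invented Blocks World fragment — the predicate names are assumptions for the example:

```python
def forward_chain(clauses, facts):
    """Compute the closure of Horn clauses (body, head): whenever every
    atom in `body` is known, conclude `head`. Facts are clauses with
    empty bodies; iteration stops when no new atom can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in known and all(a in known for a in body):
                known.add(head)
                changed = True
    return known

# Invented fragment: "a block with nothing on it is clear;
# a clear block on the table can be picked up."
clauses = [
    ((), "on_table(a)"),
    ((), "nothing_on(a)"),
    (("nothing_on(a)",), "clear(a)"),
    (("clear(a)", "on_table(a)"), "can_pickup(a)"),
]
print(sorted(forward_chain(clauses, [])))
```

The thesis's system works at the level of first-order Horn clauses with variables (as in Prolog), so this propositional sketch shows only the clause shape, not the unification machinery.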

DOWNLOAD tr84-29.pdf

TR-84-29

Yu, Yeong-Ho. "Translating Horn Clauses From English". University of Texas at Austin, Department of Computer Sciences. Report# TR-84-29 (masters). September 1984. 81 pages.

This work introduces a small grammar which translates a subset of English into Horn Clauses. The parallel between the syntactic structures of sentences and corresponding literals of their Procedural Logic representation facilitates the translation. An interpreter is also described which accepts English descriptions of problem solving methods and facts about an example domain, the Blocks World. The interpreter translates them into Horn Clauses by using the grammar, and interprets them as programs and data. It also carries out commands on data by using the programs, and answers queries about data. In addition, it provides a mechanism called "Consistency Rules" which maintains integrity of the database. This experiment shows that Procedural Logic provides a unified system to accomplish a wide variety of computations, and that a small grammar without any semantic or domain-specific knowledge can translate a small subset of English sentences into Horn Clauses. Further research on several other domains is suggested to evaluate the usefulness of this translation.

DOWNLOAD tr84-29.pdf

TR-84-28

Karimi, Ezat. "Computing Discourse Conceptual Coherence: A Means to Contextual Reference Resolution". University of Texas at Austin, Department of Computer Sciences. Report# TR-84-28 (masters). September 1984. 92 pages.

NO ABSTRACT

DOWNLOAD tr84-28.pdf

TR-84-08

Tseng, Su-Hui Hsu. "Questioning English Text with Clausal Logic Case Studies of Three Texts". University of Texas at Austin, Department of Computer Sciences. Report# TR-84-08 (masters). March 1984. 74 pages.

NO ABSTRACT

DOWNLOAD tr84-08.pdf

TR-83-229

Ramakrishnan, I. V. "A Theory of Mapping Homogeneous Graphs on a Linear Array Processor Model". University of Texas at Austin, Department of Computer Sciences. Report# TR-83-229 (masters). May 1983. 111 pages.

NO ABSTRACT

DOWNLOAD NOT AVAILABLE

TR-82-206

Boney, Joel F. "Analysis of the M6809 Instruction Set". University of Texas at Austin, Department of Computer Sciences. Report# TR-82-206 (masters). August 1982. 119 pages.

NO ABSTRACT

DOWNLOAD tr81-206a.pdf tr81-206c.pdf

TR-82-190

Springen, N. L. "A Source-to-source Translator From Pascal to C". University of Texas at Austin, Department of Computer Sciences. Report# TR-82-190 (masters). January 1982. 144 pages.

NO ABSTRACT

DOWNLOAD NOT AVAILABLE

TR-78-79

Clifford, W. D. "Using Debug Tools to Help Produce Correct Programs". University of Texas at Austin, Department of Computer Sciences. Report# TR-78-79 (masters). May 1978. 63 pages.

NO ABSTRACT

DOWNLOAD tr78-79.pdf

TR-78-78

Renaud, D. J. "The UT Virtual Machine Monitor". University of Texas at Austin, Department of Computer Sciences. Report# TR-78-78 (masters). May 1978. 73 pages.

NO ABSTRACT

DOWNLOAD tr78-78a.pdf tr78-78b.pdf

TR-76-61

Jea, Y. H. "Memory Request Scheduling in Multiprocessor Multi-memory Systems". University of Texas at Austin, Department of Computer Sciences. Report# TR-76-61 (masters). August 1976. 67 pages.

NO ABSTRACT

DOWNLOAD tr76-61.pdf

TR-76-58

Cruz, V. D. "A Study of Placement Algorithms in a Non-paged Environment". University of Texas at Austin, Department of Computer Sciences. Report# TR-76-58 (masters). May 1976. 84 pages.

NO ABSTRACT

DOWNLOAD tr76-58a.pdf tr76-58b.pdf

TR-75-51

Cunningham, M. L. "Data Integrity Management in Data Base Systems". University of Texas at Austin, Department of Computer Sciences. Report# TR-75-51 (masters). August 1975. 50 pages.

NO ABSTRACT

DOWNLOAD tr75-51.pdf

TR-75-47

Lee, J. F. "A Survey of Programming Methodologies". University of Texas at Austin, Department of Computer Sciences. Report# TR-75-47 (masters). May 1975. 123 pages.

NO ABSTRACT

DOWNLOAD tr75-47a.pdf tr75-47b.pdf

TR-74-42

Kostentsky, C. T. "Computer Assisted Instruction Via Graphics Terminals". University of Texas at Austin, Department of Computer Sciences. Report# TR-74-42 (masters). August 1974. 85 pages.

NO ABSTRACT

DOWNLOAD tr74-42a.pdf tr74-42b.pdf

TR-74-41

Cocek, B. C. "The Case Description Generator". University of Texas at Austin, Department of Computer Sciences. Report# TR-74-41 (masters). August 1974. 95 pages.

NO ABSTRACT

DOWNLOAD tr74-41a.pdf tr74-41b.pdf

TR-74-38

Haller, A. "Automatic Program Analysis". University of Texas at Austin, Department of Computer Sciences. Report# TR-74-38 (masters). August 1974. 101 pages.

NO ABSTRACT

DOWNLOAD tr74-38a.pdf tr74-38b.pdf

TR-74-31

Brown, R. M. "Analytic Model of a Large Scale Interactive System Including the Effects Of Finite Main Memory". University of Texas at Austin, Department of Computer Sciences. Report# TR-74-31 (masters). May 1974. 85 pages.

NO ABSTRACT

DOWNLOAD tr74-31.pdf

TR-73-21

Wesson, R. W. "A Pair-grammar Based String-to-graph Translator Writing System". University of Texas at Austin, Department of Computer Sciences. Report# TR-73-21 (masters). August 1973. 127 pages.

NO ABSTRACT

DOWNLOAD tr73-21.pdf tr73-21b.pdf tr73-21c.pdf

TR-73-19

Wang, Y. Y. L. "A Nucleus Verification Condition Compiler". University of Texas at Austin, Department of Computer Sciences. Report# TR-73-19 (masters). May 1973. 74 pages.

NO ABSTRACT

DOWNLOAD tr73-19.pdf

TR-73-14

Josue, E. "The Nucleus Compiler". University of Texas at Austin, Department of Computer Sciences. Report# TR-73-14 (masters). May 1973. 110 pages.

NO ABSTRACT

DOWNLOAD tr73-14.pdf tr73-14appen.pdf

TR-72-8

Lasseter, G. L. "A Model of Predictive CPU Schedules of Known Certainty". University of Texas at Austin, Department of Computer Sciences. Report# TR-72-8 (masters). December 1972. 42 pages.

NO ABSTRACT

DOWNLOAD tr72-08.pdf

TR-72-7

Lee, C. C. A. "Queueing Models of Device Utilization in Multiprogrammed Computer Systems". University of Texas at Austin, Department of Computer Sciences. Report# TR-72-7 (masters). December 1972. 46 pages.

NO ABSTRACT

DOWNLOAD tr72-07.pdf tr72-07a.pdf

For help please contact trcenter@cs.utexas.edu