From d78645adcbad2ba7424154595e5390745f645a6d Mon Sep 17 00:00:00 2001 From: David Flamme Date: Wed, 2 Nov 2011 12:57:23 +0100 Subject: [PATCH] Documentation for the ABC Analysis * doc/abc_optimization/* : Detailed elaboration on abc analysis in the cacaovm written by Klaus Braeutigam --- .../BakkalaureatSeminararbeitKB2010.tex | 1209 +++++++++++++++++ doc/abc_optimization/bibfile.bib | 854 ++++++++++++ doc/abc_optimization/gwa.sty | 156 +++ doc/abc_optimization/ngerman.sty | 680 +++++++++ 4 files changed, 2899 insertions(+) create mode 100644 doc/abc_optimization/BakkalaureatSeminararbeitKB2010.tex create mode 100644 doc/abc_optimization/bibfile.bib create mode 100644 doc/abc_optimization/gwa.sty create mode 100644 doc/abc_optimization/ngerman.sty diff --git a/doc/abc_optimization/BakkalaureatSeminararbeitKB2010.tex b/doc/abc_optimization/BakkalaureatSeminararbeitKB2010.tex new file mode 100644 index 000000000..a0924a1ff --- /dev/null +++ b/doc/abc_optimization/BakkalaureatSeminararbeitKB2010.tex @@ -0,0 +1,1209 @@ +\documentclass[11pt,a4paper]{article} + +\usepackage{gwa} +\usepackage{graphicx} +\usepackage{url} +\usepackage{ngerman} +\usepackage[latin1]{inputenc} +\usepackage{textcomp} +\usepackage{multicol} + +\begin{document} + +\title{Optimization of Array Bounds Checking in the Context of Cacao Java Virtual Machine} + +\author{ + \authN{Bräutigam Klaus}\\ + \matNr{0825965}\\ + \authI{Institute of Computer Languages}\\ + \authU{Technical University of Vienna}\\ + \email{e0825965@student.tuwien.ac.at}\\\\ + \authN{Bachelor Thesis BSc. in Informatics - Software and Information + Engineering 033 534 }\\ + \authN{supervised by Univ.Prof. Dr. 
Andreas Krall}
+}
+
+\maketitle
+
+\begin{multicols}{2}
+
+%% Abstract -------------------------------------------------------------------
+
+\abstract{
+This document covers array bounds check optimization both from a general,
+historical perspective and in relation to the details of its implementation
+within the Cacao Java Virtual Machine developed by the Institute of Computer
+Languages of the Technical University of Vienna. Good and sustainable problem
+solutions can only be found if previous ideas are studied to improve one's own
+considerations and, even more importantly, if failures of the past are not
+repeated. The first main part is therefore dedicated to a study of previous
+approaches to array bounds check optimization and their environments,
+to obtain a solid knowledge base for the general problem.
+The second part describes details and relations important
+for understanding the technique chosen for the Cacao Java VM. The document
+should thus also be interesting for readers merely looking for a comprehensive
+overview of the abstract array bounds check optimization problem.
+
+}
+
+%% Introduction ---------------------------------------------------------------
+
+\section{Introduction}
+Array bounds checking and the optimizations necessary for its efficient
+realization have been researched and discussed exhaustively over the past four
+or five decades. The rise of high-level programming languages directly led
+to several improvements in this area.
+For example, array accesses have always been a common source of malfunctions
+and security holes. If an array access index is not within the bounds of the
+array, memory is accessed illegally. This can lead to dramatic problems.
+Therefore language architects recognized very early that
+these array accesses must be validated before execution to increase the safety
+and stability of applications.
So in most cases array access indices are compared
+to the array bounds before the access is executed. But this introduces some
+kind of overhead during compilation and/or execution. Because the amount of
+information to be processed grows much faster than our computational
+capabilities, any type of overhead must be reduced or completely
+removed from applications wherever possible. As will be seen in the
+following chapters, the approaches and techniques for reducing the overhead of
+array bounds checks can be very different. To completely understand the
+problem of array bounds check (ABC) optimization, all related aspects
+must be taken into account and all topics must also be seen from a more
+abstract point of view.
+
+%% General aspects of ABCs ----------------------------------------------------
+
+\section{A Generic Reflection of ABCs\label{sec:structure}}
+This chapter provides a solid overview of ABC optimization. The
+most common forms of implementation are explained and the necessity for
+improvements of ABC optimizations is motivated. Furthermore, some background
+information is provided to get a feeling for the context of the topic.
+But it must be kept in mind that an exhaustive treatment of the topic is
+far beyond the document's scope, so we will only touch on the general ideas.
+
+ \subsection{Implementation of Array Bounds Checking}
+ Before designing new optimization techniques for ABCs it is important to know
+ exactly how these checks are implemented. There are many ways to realize
+ ABCs. Elaborateness and performance are important to keep in mind when
+ choosing a technique, and the insertion method is also interesting because
+ it is closely connected to the language architecture. Should tests
+ be integrated automatically without the developer's knowledge, or should
+ ABCs only be integrated selectively by programmers or quality assurance?
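As an illustration of what such an inserted check amounts to, here is a hand-written Java sketch (class and method names are illustrative, not part of any implementation described in this document); it also previews how zero-based arrays let a single unsigned comparison cover both bounds:

```java
public class BoundsCheckSketch {
    // Hand-written analogue of the check a compiler would insert
    // before a[i]: the index must lie in [0, a.length).
    // Because valid indices start at 0, one unsigned comparison
    // covers both bounds: reinterpreted as unsigned, any negative
    // index becomes a huge value and fails the single test.
    static int checkedLoad(int[] a, int i) {
        if (Integer.compareUnsigned(i, a.length) >= 0) {
            throw new ArrayIndexOutOfBoundsException("index " + i);
        }
        return a[i];
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        System.out.println(checkedLoad(a, 2));      // in bounds
        try {
            checkedLoad(a, -1);                     // rejected by one compare
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The explicit `if` stands in for the compare-and-branch sequence a compiler or virtual machine emits implicitly.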
+
+ \subsubsection{Compare Instruction Strategy}
+ Most implementations use compare instructions of the
+ processor and a conditional jump to react to failed comparisons. For this
+ technique the array start address and length are obtained and compared to
+ the index. For a more detailed description and good examples please
+ consult
+ \cite{AComparisonOfArrayBoundsCheckingOnSuperscalarAndVLIWArchitectures:2002}.
+ For languages like Java that guarantee that arrays always start
+ at index zero, a single unsigned compare instruction suffices to check both
+ the lower and the upper bound. This assures that
+ indices are always within the specified array bounds and that exceptions
+ or other reactions happen at the right point of program execution.
+
+ \subsubsection{Hardware Implementation}
+ It is also possible to implement ABCs directly and completely in
+ hardware. This requires some additional effort in the processor architecture,
+ but the resulting run-time overhead of ABC execution can be reduced
+ practically to 0.0\%. There have been several approaches from Sun
+ Microsystems, Inc. to design a new generation of processors that are
+ capable of this.
+ \cite{ProcessorWithAcceleratedArrayAcessBoundsChecking:2000}
+ for example describes a processor that uses two hardware comparators and
+ a disjunction to inform the instruction execution unit about the validity
+ of the array access instruction.
+ \cite{ApparatusAndMethodForArrayBoundsCheckingWithAShadowFile:2001} describes
+ a further improvement. In addition it uses a shadow register to further
+ integrate ABCs into the general instruction flow of the processor.
+
+ \subsubsection{Other Approaches}
+ Other, more exotic approaches to ABC implementation also exist.
+
+ An interesting one can be found in
+ \cite{ArrayBoundsCheckEliminationUtilizingAPageProtectionMechanism:2003}
+ where the processor's page protection mechanism is used to identify ABCs and
+ even improve their effectiveness by eliminating unnecessary ones.
+
+ \subsection{The Real Overhead Introduced by ABCs}
+ Before calling for costly and time-consuming optimizations of a specific
+ mechanism, we should analyze the overhead it actually causes. Is it
+ really worthwhile to study and research the topic? This is in many cases a
+ difficult question, because future trends must also be considered to get a
+ clear answer. Concerning ABCs it depends on the ABC implementation, the
+ language design and architecture, and on the application itself.
+
+ \subsubsection{Overhead Reduction through Hardware}
+ For decades, information processing machines were mainly designed as single
+ processor systems that did not really consider modern programming techniques
+ and patterns. Nearly all higher-level improvements achieved in
+ computer programming were realized in an abstract way using software
+ (virtual machines, multiprocessing, null pointer checks, ABCs\ldots). In the
+ past there were some approaches that tried to integrate ABCs completely in
+ hardware, but these were rejected very quickly. Nevertheless, we can
+ observe something about ABCs by comparing benchmarks of the same
+ application on the same virtual machine running on different processor
+ generations. For example, take the overhead benchmark results from
+ \cite{AVerifiableControlFlowAwareConstraintAnalyzerForBoundsCheckElimination:2009}
+ and compare them to those of
+ \cite{SafeMultiphaseBondsCheckEliminationInJava:2010}; it is easy to
+ recognize that the same application on the same virtual machine with the same
+ configuration produces a different overhead. In this case the reason is
+ found quickly.
The first benchmark was calculated on a single processor
+ system whereas the second used a multicore processor. This new processor
+ generation is able to reduce the overhead of ABCs without changing the
+ technique or implementation of ABCs. So whether ABC optimization pays off
+ depends not only on the software but also on the hardware it is running on.
+ Newer processor generations will further reduce the overhead
+ caused by ABCs implemented as processor instructions, but it is very unlikely
+ that the approaches of implementing ABCs completely in hardware will be
+ realized in the near future.
+
+ \subsubsection{Overhead in Different Types of Applications}
+ To reduce the overhead of ABCs, the application the optimization operates on
+ must use arrays frequently; otherwise the overhead produced is insignificant
+ and hardly measurable \cite{SafeMultiphaseBondsCheckEliminationInJava:2010}.
+ An important question is whether these optimizations really pay off for many
+ applications. For example, Java is mainly used due to its simple and clear
+ object-oriented and abstract nature. Most developers use Java only in
+ conjunction with powerful frameworks like Spring, and many of them only for
+ web-based applications. The resulting ABC overhead in these strongly
+ object-oriented applications is practically not measurable even if many
+ arrays are used. The overhead is mainly caused by nested method calls
+ (invokes), which are also very difficult to reduce. ABC optimizations only
+ really pay off for applications with simple object relations that make
+ extensive use of arrays and matrix calculations, as in scientific research
+ and development. But for these cases it may be better and easier to use
+ highly optimized data structures that are recognized by the machine and
+ executed in a special way.
+
+ \cite{EliminationOfJavaArrayBoundsChecksInThePresenceOfIndirection:2002}
+ mentions some libraries and also explains some new special data structures
+ that follow this approach.
+
+ \subsubsection{The Necessity of ABC Optimization}
+ It is difficult to argue for or against ABC optimization and the approach
+ that should be taken to implement it, because this strongly depends on the
+ language architecture and many other factors discussed here. But in general
+ it should be the architects' and implementors' goal to evolve the language
+ to the most effective, efficient and clear state. Even if hardware is capable
+ of executing ABCs without any resulting overhead, it is better to eliminate
+ unnecessary code to improve low-level code quality and cleanness. Program
+ code at any abstraction layer should not contain superfluous instructions or
+ code parts, even if they have no impact on execution time or security.
+ Moreover, some lightweight processor architectures (embedded systems) may
+ never be capable of implementing ABCs in hardware!
+
+ \subsection{Important Relations to ABC Optimization}
+ This chapter explains some important basic conditions that should
+ be considered before starting to think about ABC optimization techniques;
+ some of them apply to nearly all kinds of optimizations. First, the array
+ access index types should be studied in detail to identify possible
+ limitations they impose. The next important fact is that optimizations
+ only pay off if their benefits outweigh the overhead they generate. It should
+ also be kept in mind that the elaborateness and form of implementation
+ dramatically influence the effectiveness of the optimization.
+
+ \subsubsection{Array Access Index Types}
+ The array access index type must also be taken into consideration for
+ several reasons.
Most computer programming languages (from now on referred
+ to as languages) of practically all programming paradigms accept the integer
+ type as array index type. The word length of this type varies in most cases,
+ and the signedness of integral numeric types can also be treated differently.
+ In C/C++, integer types exist as signed or unsigned, and the length is not
+ fully specified but depends mainly on the word size of the underlying
+ hardware. Java, on the other hand, declares its integer type as explicitly
+ signed and 32 bits wide (ranging from -2,147,483,648 to 2,147,483,647 in
+ two's complement). The Java language furthermore declares that overflowing or
+ underflowing values wrap around, which is undesirable for array access index
+ types because there are hardly any reasonable applications that access arrays
+ through overflowing index expressions \cite{JavaLanguageSpecificationv3.0:2005}.
+ This source of malfunctions and security leaks must also be addressed
+ precisely when designing optimization mechanisms.
+
+ \subsubsection{Benefits to Execution Time Ratio}
+ The benefits of ABC optimization must always outweigh the execution
+ time necessary to compute it. It is not acceptable to implement an
+ optimization that saves execution time for some applications but slows others
+ down; there is nothing to gain.
+ \cite{AComprehensiveApproachToArrayBoundsCheckEliminationForJava:2002}
+ for example shows that ABC optimization, activated together with others,
+ slows down some applications of the benchmark used. This is extremely
+ important for optimization techniques that work during Just-In-Time
+ compilation and execution, where expensive, elaborate and, above all,
+ disruptive processes must be avoided as far as possible. As
+ mentioned in the next chapter, this is perhaps best accomplished by splitting
+ optimization techniques into different phases and placing each phase where it
+ is unproblematic to run.
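The wrap-around behaviour of Java's 32-bit int described above can be demonstrated directly; the following minimal sketch (hand-made, not taken from any cited work) shows that an overflowing index expression silently wraps to a negative value, which only the run-time bounds check catches:

```java
public class WrapAroundDemo {
    public static void main(String[] args) {
        // Java ints are 32-bit two's complement; overflow wraps silently.
        int i = Integer.MAX_VALUE;   //  2147483647
        int j = i + 1;               // wraps to -2147483648, no error raised
        System.out.println(j);

        // Used as an array index, the wrapped value is negative;
        // it is the inserted bounds check that catches it at run time.
        int[] a = new int[4];
        try {
            int x = a[j];
            System.out.println(x);   // never reached
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("bounds check caught wrapped index");
        }
    }
}
```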
+
+ \subsubsection{Intermediate Representation}
+ Which intermediate representation (IR) should the optimization operate on?
+ This has to be considered too. Most compilers fulfill their
+ compilation tasks in several phases, and for this reason different
+ intermediate representations of the program arise. These depend completely on
+ the compiler architecture and the language itself, especially on the
+ programming paradigm \cite{ModernCompilerImplementationInC:1998}.
+ But not only during the compilation process do new program representations
+ arise. For interpreted languages some kind of IR is necessary to transport
+ code between compiler and interpreter. For example, Java Byte Code is such
+ an IR that is delivered to the Java Virtual Machine and executed. If the
+ interpreter wants to apply optimizations to the code, it is forced to
+ operate on the intermediate representation delivered to it, or to perform an
+ extremely expensive transformation into another desired form, which slows
+ down execution even more. Java Byte Code, for example, is not well suited to
+ serve as an IR because it suffers from many problems concerning size,
+ security\ldots{} and is not really practical for exhaustive optimizations.
+ Finally, it should not be forgotten that high-level code (Java, C++) can also
+ be a very suitable representation for obtaining optimization constraints!\par
+
+ A lot of effort has been spent on Java Byte Code annotations that integrate
+ optimization information into conventional class files; they are ignored by
+ most compilers but allow some to boost execution performance.
+ But these approaches also have many disadvantages: the annotations can be
+ abused, their usage is not standardized, and they further increase file size
+ and processing effort\ldots
+ \cite{AFrameworkForOptimizingJavaUsingAttributes:2000} describes an example
+ framework that follows the approach discussed previously.
Also
+ \cite{VerifiableAnnotationsForEmbeddedJavaEnvironments:2005} can be an
+ interesting source to begin with because it focuses especially on
+ three important optimizations and on low-cost checks. Furthermore
+ \cite{VerifiableRangeAnalysisAnnotationsForArrayBoundsCheckElimination:2007}
+ and \cite{SafeBoundsCheckAnnotations:2008} deal with annotations specifically
+ for ABC optimization. In the historic review many more approaches based on
+ this construct will be mentioned.\par
+
+ Another way to deal with the problem of the IR is to use a completely new
+ approach: in the past, several forms based on Static Single Assignment (SSA)
+ were researched and are likely to become the commonly used ones in the
+ future. The form is based on the constraint that every variable has exactly
+ one point of assignment. But many more constraints and ideas were integrated
+ into different modern SSA forms, adapted and optimized for various purposes
+ and techniques. For a general idea of SSA forms consult
+ \cite{AVerifiableSSAProgramRepresentationForAgressiveCompilerOptimization:2006}.
+ \cite{SafeTSAATypeSafeAndReferentiallySecureMobileCodeRepresentationBasedOnStaticSingleAssignmentForm:2001}
+ explains SafeTSA, a type-safe and size-optimized SSA form, which is used in
+ \cite{UsingTheSafeTSARepresentationToBoostThePerformanceOfAnExistingJavaVirtualMachine:2002}
+ to boost the execution performance of a Java Virtual Machine. Another
+ approach is covered by
+ \cite{AVerifiableSSAProgramRepresentationForAgressiveCompilerOptimization:2006},
+ which focuses on clean and efficient embedding of range checks into other
+ aggressive optimization phases.
+ \cite{ABCDEliminatingArrayBoundsChecksOnDemand:2000} uses a new
+ representation, the Extended SSA form (eSSA), which introduces new
+ constraints for dealing efficiently with program branches
+ (if-else, while\ldots).
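To make the single-assignment constraint concrete, here is a minimal hand-made sketch (the names i1, i2 are illustrative): the first block reassigns one variable, while the SSA-style block gives every assignment a fresh name, so each name has exactly one definition and facts about it hold everywhere it is live:

```java
public class SsaSketch {
    public static void main(String[] args) {
        // Original form: one variable, two assignment points.
        int i = 0;
        i = i + 2;

        // SSA-style form: every assignment defines a fresh name.
        // An optimizer can now rely on "i2 == i1 + 2" wherever i2
        // is used, which is exactly what a bounds-check prover needs.
        final int i1 = 0;
        final int i2 = i1 + 2;

        System.out.println(i == i2);  // both forms compute the same value
    }
}
```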
+ Another SSA form named HSSA is introduced in conjunction with a new
+ optimization algorithm, ABCE, in
+ \cite{ArrayBoundsCheckEliminationForJavaBasedOnSparseRepresentation:2009}.
+ Finally
+ \cite{TowardsArrayBoundCheckEliminationInJavaTMVirtualMachineLanguage:1999}
+ even introduces a new Java Byte Code representation called JVMLa that is also
+ suitable for several optimization tasks. Nearly all of these systems are not
+ commonly used and are therefore not standardized, but they show how future
+ intermediate representations could be designed or influenced by different
+ ideas.
+
+ \subsubsection{Positioning of Optimizations}
+ Beyond all the previously discussed circumstances, another very
+ important aspect must be considered: the positioning of optimizations.
+ The main distinction is between compiled and interpreted
+ languages.\par
+
+ Compiled languages are restricted to optimizations that can be calculated at
+ compile time (mainly static optimizations). This imposes big
+ limitations on the possible and suitable techniques, but in return expensive
+ computations are not a real problem. Long compile times are widely accepted
+ and controlled through special optimization level
+ parameters (for example -O3 for most C compilers). For development and
+ debugging the optimizations are not really necessary, so they are omitted;
+ for deployment a single optimized version is created, where expensive and
+ long-running optimizations are not seen as problematic.\par
+
+ For interpreted language architectures, new aspects must be taken into
+ account. On the one hand, the possibility of powerful, highly dynamic
+ optimizations arises that use information only available at run time; on the
+ other hand, it is unacceptable to compute expensive optimizations because
+ that would slow down execution dramatically. So the most successful approach
+ is probably to split dynamic optimizations into compile-time and run-time
+ parts.
+ Expensive precalculations are delegated to the compiler, and fast, effective
+ optimizations that use information obtained at compile and run time are
+ performed by the interpreter (virtual machine). This approach has been
+ researched exhaustively by the most recently published ABC optimization
+ techniques. At compile time, constraints and equalities/inequalities are
+ inferred from high-level code and intermediate representations through
+ theorem proving and even speculation; special optimizing virtual machines
+ then interpret these constraints and annotations to calculate very effective
+ optimizations efficiently.
+ \cite{SafeMultiphaseBondsCheckEliminationInJava:2010} and all previous
+ versions by the same authors
+ \cite{AVerifiableControlFlowAwareConstraintAnalyzerForBoundsCheckElimination:2009},
+ \cite{SpeculativeImprovementsToVerfifiableBoundsCheckElimination:2008}\ldots
+ are recommended for understanding the complete idea. Most previously
+ mentioned approaches that use annotations in any form (Soot framework)
+ basically follow it as well.\par
+
+ One must also consider that optimizations have to fit into the compilation
+ and interpretation process. Optimizations should not be an isolated process
+ that is quickly added to the compiler/interpreter architecture at the end of
+ the design process. Rather, optimizations, related techniques and
+ especially the intermediate representations used should be taken into
+ account very early in language architecture design to get clean, clear,
+ effective and efficient compilation and interpretation processes overall
+ \cite{ModernCompilerImplementationInC:1998}.
+ Nearly all mechanisms researched in the context of Java and the Java Virtual
+ Machine suffer from the drawback that many optimizations were added to these
+ constructs too late and are therefore not really suitable for the complete
+ system.
+
+ \subsection{Types of ABCs}
+ ABCs are always found before array access instructions.
In Java the
+ instructions are provided for every primitive type as xaload and xastore,
+ where x represents the array type (integer, float, signed byte,
+ character\ldots). But not only the array access instructions themselves are
+ important for ABCs, but also the context in which they appear. Therefore some
+ placements of ABCs are of special significance. These are now described in
+ some detail to show which forms of interest can arise. The first three
+ classify whether ABCs can be proven statically, dynamically, or never to be
+ really necessary during execution for keeping to the Java conventions. The
+ remaining three deal with the code structure in which these ABCs occur. A
+ good explanation concerning the raw classification of ABCs in general can be
+ found in \cite{AFreshLookAtOptimizingArrayBoundChecking:1990} and, more
+ recently by the same author, in
+ \cite{OptimizingArrayBoundChecksUsingFlowAnalysis:1993}.
+
+ \subsubsection{Redundant ABCs}
+ Some ABCs inserted by primitive insertion algorithms are completely
+ redundant and can be statically proven unnecessary. For example, three
+ sequential array accesses a[i], a[i+1] and a[i+2] lead to three inserted
+ ABCs, but the first two of them are completely redundant because the third
+ dominates them. The overall number of these checks is very low, and the
+ fraction of execution time occupied by them even lower. So there
+ are hardly any worthwhile optimizations possible through their elimination,
+ but they are very easy and cheap to detect and remove, so it should be done
+ for reasons of cleanness and completeness. Removing them is only really
+ important if they occur within loops.
+
+ \subsubsection{Partial Redundant ABCs}
+ Some ABCs are related through different program branches arising from
+ if-else or while statements, which leads to ABCs that are partially
+ redundant. Sometimes they can be simplified or partially eliminated.
These ABCs also arise
+ relatively often in programs, and therefore optimization through
+ elimination or code motion can really pay off. For a detailed explanation
+ of this topic please consult the thorough considerations in
+ \cite{GlobalOptimizationBySuppressionOfPartialRedundancies:1979}.
+
+ \subsubsection{Necessary ABCs}
+ There are also ABCs that represent exactly what the mechanism was invented
+ for. They are placed in front of array access instructions that cannot be
+ proven valid, so these ABCs cannot be eliminated, but can sometimes be
+ simplified or moved using code motion procedures. A good and simple example
+ are array access indices read from the command line or from an external data
+ source such as files or a database!
+ Depending on the application, they occur anywhere from seldom to very often.
+ Whether an ABC can be proven superfluous or necessary depends strongly on the
+ power of the optimization technique used. Sometimes array accesses that
+ involve indirection or interprocedural program structures cannot be proven
+ unnecessary although they are. This will also be explained in the chapter
+ about global ABCs. Advanced techniques capable of proving these exotic kinds
+ of redundant or partially redundant ABCs can sometimes eliminate or simplify
+ large amounts of elaborate and unnecessary ABCs.
+
+ \subsubsection{Local ABCs}
+ Local ABCs are found in straight-line program structures without branches,
+ loops and other conditional statements. These code parts are executed
+ sequentially without possible interruptions and are generally very simple and
+ short. Local ABCs are in most cases redundant and can be completely
+ eliminated, but they do not have much impact on program execution time.
+
+ \subsubsection{Global ABCs}
+ Global ABCs are much more difficult. We must first distinguish between
+ interprocedural and intraprocedural global ABCs.
If several method invocations are
+ involved in one array access, it is hard to analyze the structure
+ automatically and obtain constraints, especially in languages like Java with
+ dynamic class loading. These are called interprocedural ABCs. Only very few
+ techniques have the power to prove such elaborate ABCs unnecessary. If
+ ABCs are analyzed only in the context of one method without
+ additional method invocations, they are called intraprocedural. Array
+ accesses where the indices are themselves read from arrays (indirection) can
+ also be very challenging to prove redundant. A technique that pays special
+ attention to ABCs in the presence of indirection is
+ \cite{EliminationOfJavaArrayBoundsChecksInThePresenceOfIndirection:2002},
+ which gives solid insight into the difficulties raised by this topic. Due to
+ their frequent occurrence in combination with matrix calculations and other
+ costly procedures, optimization of global ABCs can reduce the execution time
+ of programs.
+
+ \subsubsection{ABCs in Loops}
+ ABCs placed in loops deserve special attention because this is where
+ the biggest run-time overhead of all possible ABC types arises. Unnecessary
+ ABCs placed in loops are executed repeatedly and can account for huge
+ fractions of execution time. Sometimes local and global ABCs found in
+ loops can be eliminated or simplified, and sometimes ABCs can even be moved
+ out of the loop using code motion techniques, which allows dramatic
+ optimizations. One problem is that some optimizations change program
+ behavior; especially the exception throw conventions of the Java language
+ can introduce considerable additional effort. All this must be taken into
+ account. ABC loop optimization is so important and powerful that some
+ techniques achieve good improvements just by analyzing a few common loop
+ structures and patterns.
For
+ example,
+ \cite{ArrayBoundsCheckEliminationInTheContextOfDeoptimization:2009} describes
+ the mechanisms implemented in the HotSpot Java Virtual Machine that are based
+ on this idea.
+
+%% Historic review of ABCs ----------------------------------------------------
+
+\section{A Historic Review of ABCs and Related Optimization}
+The history of ABC optimization is presented in
+chronological order to underline the whole evolution. To give a quick insight
+into the different documents, a classification is always provided that keeps
+as far as possible to the parameters explained in the previous chapter, so
+that the rough idea described can be understood quickly.
+
+ \subsection{The Early Beginning}
+ The first approaches to ABC optimization all operate on a high-level
+ code representation (no additional intermediate representation necessary) and
+ use data flow graphs to represent program parts in an abstract manner. The
+ original ideas of ABCs and optimizations in this direction cannot be
+ attributed explicitly to one person.\par
+
+ In the author's opinion, William H. Harrison was the
+ first to cover the problem directly. In May 1977 he described in
+ \cite{CompilerAnalysisOfTheValueRangesForVariables:1977} the more
+ general problem of value range analysis of variables. Array bounds checks are
+ a subproblem of this, because array indices are just numerical variables that
+ must keep to value ranges (0..arraylength). In his paper he introduced an
+ algorithm called Range Propagation that obtains variable value ranges,
+ starting from [-Inf:+Inf] and narrowing them, for example to [0..100], by
+ analyzing the related programming structures for each variable. Another
+ algorithm, Range Analysis, even covers the problem of induction variables and
+ therefore mainly concerns loops. The algorithm derives linear equations that
+ are then solved based on a data flow analysis.
He also introduced the ideas of loop
+ counting, general program diagnostics and program termination analysis,
+ which can point out several mistakes and malfunctions to the programmer.
+ This can be seen as the real origin of the idea of how to insert and optimize
+ ABCs effectively. His algorithms are integrated into the conventional
+ compilation process, and he even considered the overall compilation time of
+ his algorithms.\par
+
+ At nearly the same time, in 1977, Norihisa Suzuki and Kiyoshi Ishihata wrote
+ \cite{ImplementationOfAnArrayBoundChecker:1977}, in which they explicitly
+ insert ABCs into high-level code in the form of assert statements in front of
+ array accesses to provide convenient error information. This is done by
+ deducing formulas from the program and using a theorem prover to decide
+ whether an ABC (assertion) is necessary for an access instruction or not.
+ Because only static analysis is done, no really powerful optimizations can be
+ applied. Especially the intraprocedural form limits the general success, but
+ most local and even some global ABCs are tackled. All things considered, this
+ paper is the first really modern approach to theorem proving of ABCs and
+ definitely a good starting point for studying the topic in depth.\par
+
+ In 1979 E. Morel and C. Renvoise published
+ \cite{GlobalOptimizationBySuppressionOfPartialRedundancies:1979}, which
+ introduced the formal treatment of partial redundancies, also necessary for
+ more elaborate analysis of global ABCs. For really
+ understanding partially redundant ABCs this paper is a must.\par
+
+ In 1982 Victoria Markstein, John Cocke and Peter Markstein presented a new
+ approach for optimizing ABCs in
+ \cite{OptimizationOfRangeChecking:1982}. For the first time the idea is
+ documented that code parts with high execution frequencies (loops) deserve
+ special attention (analysis effort). They introduced the use of subexpression
+ elimination and code motion for optimizing ABCs.
The technique operates on the
  code by deriving Strongly Connected Regions SCRs using USE and DEF
  functions. These were also used previously by
  \cite{CompilerAnalysisOfTheValueRangesForVariables:1977}, albeit with much
  more complicated and detailed explanations. The technique is based on
  data flow analysis and meant to be integrated into a multiphase compiler
  after all previous optimizations have been accomplished. The same authors
  even obtained a US patent in 1987 for their ABC optimization technique
  \cite{OptimizationOfRangeChecking:1987}.\par

  In 1990 Rajiv Gupta published the results of his research on ABC
  optimization in
  \cite{AFreshLookAtOptimizingArrayBoundChecking:1990}. He used a
  so called Minimal Control Flow Graph MCFG to implement some revolutionary
  new approaches like the elimination of redundant ABCs, the combination of
  partially redundant ABCs and the propagation of ABCs out of loops (code
  motion). The paper describes exactly how the MCFG is built from source code
  and also gives the algorithms to perform elimination, propagation and
  combination of ABCs. This provides very precise guidance for first
  experiments with ABC optimization algorithms and abstract representations.

  In 1992 Jonathan M. Asuru presented in
  \cite{OptimizationOfArraySubscriptRangeChecks:1992} an approach that started
  dealing more precisely with global ABCs, which definitely allow more
  important optimizations. He claimed that previous techniques were not able to
  deal with elaborate loops (nested loops, nonconstant induction
  variables\ldots), and he was right. He described two new mechanisms that were
  able to deal with much more complicated program constructs than all others
  before.
  \begin{description}
  \item[Inner-Loop Guard Elimination ILGE] deals generally with the
  conventionally known local and global ABC types.

  \item[Conservative Expression Substitution CES] deals with nested loops and
  a partial order for related range checks.
Also important with CES is Local Range
  Check Optimization, where greatest lower bounds and least upper bounds are
  used to eliminate range checks that are redundant or partially redundant.
  Furthermore, Loop Relative Global Range Check Optimization builds up a system
  of structures to improve the checking of nested loops, and finally Range
  Check Propagation uses range check motion to move ABCs to more efficient
  program places.
  \end{description} \par

  In 1993 Rajiv Gupta again published his new insights into the topic in
  \cite{OptimizingArrayBoundChecksUsingFlowAnalysis:1993}. Many refinements to
  his previous ideas were added and the knowledge about the general problem of
  ABC optimization increased. He assumed that readers are familiar with his
  previous work
  \cite{AFreshLookAtOptimizingArrayBoundChecking:1990}.
  \begin{description}
  \item[Local Optimization] deals with Identical Checks, Subsumed Checks with
  Identical Bounds and Subsumed Checks with Identical Subscript Expressions.

  \item[Global Optimizations] are implemented using first an algorithm for
  modifying checks to create redundant checks and secondly an algorithm for
  the elimination of these redundant checks. This important idea of first
  inserting new checks and then eliminating them is brilliant and was picked up
  by later techniques. It allows the identification and resolution of difficult
  redundancies.

  \item[Propagation of Checks Out of Loops] is done using a refined algorithm
  for propagation.
  \end{description} \par

  \subsection{The Chaotic Middle Ages}
  Until 1995 (roughly the first two decades) only a very small group of
  visionaries researched ABCs and related optimization techniques. For
  performance and technological reasons, all these approaches operated directly
  on high level program code, like J. Welsh for Pascal
  \cite{EconomicRangeChecksInPascal:1978} and some others for C and Fortran.
  All things considered, all previous approaches were based on the same general
  idea of building flow graphs and applying data flow analysis. But since 1995
  many scientists started researching ABC optimization and therefore a chaotic
  amount of new techniques and ideas appeared. From 1995 on most approaches are
  based on SSA program intermediate representations and algorithms suitable for
  processing optimizations on them, but some exotic and completely new ideas
  were invented and researched as well. \par

  In 1995 Priyadarshan Kolte and Michael Wolfe published
  \cite{EliminationOfRedundantArraySubscriptRangeChecks:1995}, where they
  explain their ABC optimization techniques in the Nascent Fortran compiler.
  They use an SSA intermediate form for induction analysis and furthermore a
  Check Implication Graph CIG for global ABCs. They tried different approaches
  but also mentioned that a combination of them would deliver the best results.
  Some new and more detailed information about Partial Redundancy Elimination
  PRE is also provided that directly covers the optimization of partially
  redundant ABCs.
  \begin{description}
  \item[No-Insertion NI] means optimization by check elimination without adding
  any new, simpler checks.
  \item[Safe-Earliest SE] achieves slightly more eliminations than LNI and NI,
  but not many, and is the first of two PRE placement variants.
  \item[Latest-Not-Isolated LNI] is the second PRE placement variant.
  \item[Check Strengthening CS] is already described in
  \cite{OptimizingArrayBoundChecksUsingFlowAnalysis:1993}.
  \item[Preheader-Insertion LI] inserts only loop invariant checks.
  \item[Preheader-Insertion Loop Limit Substitution LLS] is the clear winner if
  compile time and the number of eliminations are taken into account.
  \end{description} \par
  In the end this paper proves that optimized check insertion techniques are
  much better than elimination-only techniques.
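To make the flavor of such placement techniques concrete, the following is a rough, invented Java sketch (not taken from the paper) of what a preheader insertion with loop limit substitution effectively achieves: the per-iteration range check is subsumed by one check of the loop limit before the loop. All names are illustrative, and the sketch simplifies exception timing, which a real compiler must preserve exactly.

```java
// Hypothetical sketch of the LLS idea: the per-iteration check on i
// is replaced by a single preheader check on the loop limit n.
public class LlsSketch {
    // Original form: every access a[i] carries a check 0 <= i < a.length.
    static int sumChecked(int[] a, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            if (i < 0 || i >= a.length)            // per-iteration check
                throw new ArrayIndexOutOfBoundsException(i);
            sum += a[i];
        }
        return sum;
    }

    // LLS form: since i runs from 0 to n-1, all checks inside the loop
    // are subsumed by the single preheader test n <= a.length.
    static int sumLls(int[] a, int n) {
        if (n > a.length)                           // preheader check
            throw new ArrayIndexOutOfBoundsException(n - 1);
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i];                            // check-free body
        return sum;
    }
}
```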
\par

  In 1998 Hongwei Xi and Frank Pfenning dealt in
  \cite{EliminatingArrayBoundCheckingThroughDependentTypes:1998} with ABC
  optimization in ML. So this time ABC optimization also became very
  interesting for other programming paradigms like functional programming.\par
  S. P. Midkiff, J. E. Moreira and M. Snir also published new work
  concerning this topic.
  \cite{OptimizingArrayReferenceCheckingInJavaPrograms:1998} is a
  very voluminous paper that directly covers the problem of ABC optimization
  for the Java language. At the beginning a very formal theory about the topic
  is presented, and then all optimization techniques are explained in detail,
  always with good examples in high level Java source code. This work gives a
  really comprehensive insight into the topic in relation to the Java
  language. \par

  In 1999 the paper
  \cite{TowardsArrayBoundCheckEliminationInJavaTMVirtualMachineLanguage:1999}
  by Xi and Xia introduced a new idea. They did not agree with the standard
  Java Virtual Machine Language JVML (Java bytecode) and wanted to cope with
  ABCs through a new system of dependent types to improve the safety of the
  language and allow successful optimizations at the same time. So they
  designed a new intermediate code representation called JVMLa that should be
  created by the Java compiler and that should only contain ABCs that can't be
  proven unnecessary at compile time. This approach is questionable because the
  Java compiler output (JVML) is standardized and widespread, and a very
  complicated ABC optimization part (at compile time) is assumed to already
  function properly in the Java compiler. This is very impractical, but it is a
  theoretical idea that may be taken into account by other approaches or
  languages in the future.
\par

  In 2000 the previously mentioned US patent
  \cite{ProcessorWithAcceleratedArrayAcessBoundsChecking:2000} was registered
  and Bodik, Gupta and Sarkar presented a new algorithm for optimized ABC
  execution in
  \cite{ABCDEliminatingArrayBoundsChecksOnDemand:2000}. This algorithm is an
  exotic one because it uses a transformation process at run time that converts
  Java bytecode to a slightly extended SSA form (eSSA), which is normally a
  very expensive procedure, and applies a constraint analyzer and solver on it
  that really achieves a performance gain. The constraint analyzer adds
  constraints to the eSSA instructions and builds up an inequality graph IG
  on which only a shortest path finder (constraint solver) is applied to
  optimize ABCs. This is very impressive because the whole approach is very
  costly, but it really pays off, as depicted in the results. Array Bound Check
  elimination on Demand (ABCD) was really able to improve the execution time of
  some applications in the used benchmark. \par

  In 2001, one year after the last US patent concerning hardware implementation
  of ABCs,
  \cite{ApparatusAndMethodForArrayBoundsCheckingWithAShadowFile:2001} was
  registered as a new idea. Also Chin, Khoo and Xu published in
  \cite{DerivingPreconditionsForArrayBoundCheckElimination:2001} some further
  ideas on how to derive preconditions for ABCs. This very formal approach and
  documentation gives some more detailed insights into the problem of program
  code analysis for optimizing ABCs. \par

  In 2002 Bentley, Watterson and Lowenthal compared the execution of ABCs on
  superscalar and VLIW hardware architectures in
  \cite{AComparisonOfArrayBoundsCheckingOnSuperscalarAndVLIWArchitectures:2002},
  which gives good explanations concerning the implementation of ABCs using
  processor compare instructions.
\par

  Qian, Hendren and Verbrugge present three algorithms
  they implemented using the Soot framework, previously mentioned in IBM's
  HPCJ, in
  \cite{AComprehensiveApproachToArrayBoundsCheckEliminationForJava:2002}. The
  first algorithm, Variable Constraint Analysis (VCA), works on a simplified
  directed weighted graph that reflects the relations and behavior of the
  relevant variables (array access indices). Array Field Analysis AFA and
  Rectangular Array Analysis deal with special forms of array structures that
  are hard to analyze but can in some circumstances really allow good
  optimizations. \par

  Gurd and Freeman et al. presented a further paper
  \cite{EliminationOfJavaArrayBoundsChecksInThePresenceOfIndirection:2002} that
  exactly captures the problem of ABCs in the presence of indirection, which
  simply means that array access indices are stored in another array.
  Optimizations in this scope are not easy and require a new technique.
  The solution is not embedded in the Java compiler or VM itself
  but implemented directly as Java classes that must be used by programmers.
  Three classes with different approaches are presented but only one of them
  seems really usable.
  \begin{description}
  \item[ImmutableIntMultiarray:] The array is checked for invalid bounds and is
  not allowed to be modified during index access operations!
  \item[MutableImmutableStateIntMultiarray:] The array can be in two states; in
  the first (mutable) bounds are checked and in the second (immutable) no
  checks occur.
  \item[ValueBoundedIntMultiarray:] This array enforces that all entries
  (indices) are within the defined range and so no exception is possible.
\end{description}
  The idea and all of the explanations and details about matrix
  calculations and related ABCs are really good, but in general the practical
  benefits are small, because Java specifically wants to prevent that the
  programming style of classes has an impact on run time and performance; this
  is exactly what Java wants to abstract away from. \par

  In 2002 Gupta, Pratt, Midkiff and Moreira registered the US patent
  \cite{MethodForOptimizingArrayBoundsChecksInPrograms:2002}, where they
  describe a very general method for optimizing ABCs in many facets. \par

  In 2003 Motohiro Kawahito published an interesting and very exotic idea for
  ABC elimination in
  \cite{ArrayBoundsCheckEliminationUtilizingAPageProtectionMechanism:2003}:
  exploiting the hardware processor's page protection mechanism.
  It is based on the idea of placing an array at the end/beginning of a page
  and inspecting whether an access leaves the valid page. If the page
  protection throws an exception, it can be derived that the array bounds were
  violated. \par

  In 2005 Nguyen and Irigoin presented with
  \cite{EfficientAndEffectiveArrayBoundChecking:2005}
  a document that implements some ABC optimizations in a FORTRAN compiler
  (PIPS). The general ideas are well known, but the detailed
  and formal descriptions give further insights into the details of the
  whole problem. The first idea is based on Gupta's approach: insert ABCs and
  eliminate redundant ones, and furthermore insert ABCs that can't be proven
  unnecessary. The last important contribution is an algorithm for array size
  checking and analysis to prove array access violations across different
  procedure calls. So the technique allows interprocedural ABC optimization,
  which is a real challenge and rarely implemented anywhere.
\par

  Another large document,
  \cite{SymbolicBoundsAnalysisOfPointersArrayIndicesAndAccessedMemoryRegions:2005},
  was published by Rugina and Rinard, where they describe a new framework that
  deals with general problems like accessed memory regions but also with bounds
  analysis of pointer array indices. Due to its extensiveness, any
  explanation of it is widely out of this document's scope. \par

  In 2007 two new directions were started that are explained more precisely in
  the next chapter. \par

  In 2008 Popeea, Xu and Chin presented in
  \cite{APracticalAndPreciseInferenceAndSpecializerForArrayBoundChecksElimination:2008}
  theory and experiments that use forward relational analysis to obtain
  postconditions and backward precondition derivation to infer precise
  (path and context sensitive) and practical pre/postconditions for an
  inference algorithm that reduces ABCs. Two phases, inference and
  specialization (which inserts the necessary checks), are used to obtain the
  right pre/postconditions. Weak and strong derivation processes lead to the
  results. In the analysis phase the Presburger algorithm is used, which has
  high complexity, and therefore the number of formulas must be reduced as far
  as possible. Flexivariants of methods mean that, depending on their usage,
  the conditions can be weaker or stronger, so different condition strengths
  are calculated and stored for maximum efficiency and minimum memory effort.
  The technique also deals with array indirections. It works on a high level
  artificial language IMP that allows no loops but recursion (fix-point
  analysis) and also has some other restrictions. The inference algorithm must
  be run at compile time and some cases were really expensive (\(>\)60min)! So
  it is not suitable for run-time checks! The approach takes interprocedural
  behavior into account but without dynamic class loading, because the analysis
  works bottom-up.
The work is in general an extremely formal approach that makes
  heavy use of formulas, foreign symbols and hard to understand mathematical
  definitions, so the contribution is in the best case of theoretical nature
  but not really suitable in practice.

  \subsection{State-Of-The-Art Methodologies}
  The previous path through history was very chaotic, but since 2007 two
  real main directions have arisen that have the potential to be, in the end,
  satisfactory for solving the problem of ABC optimization. The first approach
  is already implemented in the HotSpot Java Virtual Machine and uses a fast
  transformation from Java bytecode to an SSA based intermediate
  representation to apply low cost but extremely effective analysis techniques
  for well known code patterns and code structures, improving ABC optimization
  quality quite well and precisely. The second approach can be summarized under
  the name Multiphase Bound Check Elimination (MBCE), where the Java compiler
  and the Java Virtual Machine process separate parts of the ABC optimization
  process to get high quality and highly efficient results.
  The benchmarks and measurements are also suitable to reveal the real impact
  and efficiency of the presented and tested techniques. \par

  In 2007 Würthinger, Wimmer and Mössenböck published
  \cite{ArrayBoundsCheckEliminationForTheJavaHotSpotTMClientCompiler:2007},
  where they present insights into the general functionality of the HotSpot
  Java VM and the ABC optimization technique it uses. Their algorithm optimizes
  some well known programming patterns in Java to eliminate redundant checks
  and to move partially redundant checks out of loops, but keeps
  deoptimization in mind for check eliminations that cannot be proven.
This means that if checks can't be proven valid,
  the VM stops using the JIT compiled code and goes back (via OSR) to
  interpretation mode (deoptimization), which is slower but easily allows
  keeping to the Java language conventions like throwing exceptions at the
  right place\ldots. But the results show no significant improvement, because
  the checks' share of execution time is not very high in the used benchmarks!
  A maximum of 5\% was possible! The solution is clean, keeps to widely
  accepted standards and optimizes ABCs if they occur very often. \par

  Another publication from 2007,
  \cite{VerifiableRangeAnalysisAnnotationsForArrayBoundsCheckElimination:2007},
  is the beginning of the most recently researched
  technique of von Ronne, Psarris and Niedzielski. This document describes
  verifiable annotations based on SafeTSA for optimizing ABCs by shifting slow
  optimizations to a Java to SafeTSA compiler and letting the JIT compiler do
  fast and also efficient optimizations based on the created annotations. This
  allows the usage of expensive optimization techniques in time critical
  virtual machines (JIT compilers). The interesting problem of integer
  overflow/underflow, which is not taken into account by many other techniques
  but constitutes a security and safety hole, is also mentioned. This should be
  taken into consideration in any implementation so that it can react
  accordingly to over- and underflows. The theory of the built constraints and
  linear inequalities/equalities and the prover are strongly related to
  \cite{ABCDEliminatingArrayBoundsChecksOnDemand:2000} but provide
  some improvements; they are also meant to be integrated only into low level
  code (SafeTSA) as annotations and can therefore use much more expensive
  techniques than ABCD can.
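The constraint-graph proving used by ABCD and its descendants can be illustrated with a toy inequality graph. In this invented Java sketch (not the original implementations), an edge $u \rightarrow v$ with weight $c$ encodes the constraint $v - u \le c$; a check $i < \mathit{len}$ is proven redundant when the shortest path from the length node to the index node has weight at most $-1$. Negative cycles are ignored for brevity.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy inequality graph: edge u -> v with weight c means v - u <= c.
// A bounds check "i < len" is redundant if the shortest path from
// len to i has weight <= -1, i.e. i - len <= -1 is provable.
public class InequalityGraph {
    static final class Edge {
        final String to; final int w;
        Edge(String to, int w) { this.to = to; this.w = w; }
    }

    private final Map<String, List<Edge>> edges = new HashMap<>();

    // record constraint: v - u <= c
    void constraint(String u, String v, int c) {
        edges.computeIfAbsent(u, k -> new ArrayList<>()).add(new Edge(v, c));
    }

    // Bellman-Ford style relaxation; returns null if unreachable.
    Integer shortestPath(String from, String to) {
        Map<String, Integer> dist = new HashMap<>();
        dist.put(from, 0);
        for (int round = 0; round <= edges.size() + 1; round++)
            for (Map.Entry<String, List<Edge>> e : edges.entrySet()) {
                Integer d = dist.get(e.getKey());
                if (d == null) continue;
                for (Edge edge : e.getValue())
                    dist.merge(edge.to, d + edge.w, Math::min);
            }
        return dist.get(to);
    }

    boolean provenLower(String index, String length) {
        Integer d = shortestPath(length, index);
        return d != null && d <= -1;   // proves index < length
    }
}
```

For a loop `for (i = 0; i < a.length; i++)`, the condition on the taken branch contributes the edge `len -> i` with weight `-1`, so the check at `a[i]` is proven redundant by a single shortest-path query.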
\par

  In 2008 Nguyen and Keryell published
  \cite{EfficientIntraproceduralArrayBoundChecking:2008}, which describes a
  further framework that is based on deriving expensive linear equalities at
  compile time and integrating them as annotations into a special form of
  intermediate representation, like SafeTSA for example. This approach is
  strongly connected to the previous paper
  \cite{VerifiableRangeAnalysisAnnotationsForArrayBoundsCheckElimination:2007}
  (same introduction). They implemented a verifiable annotation generator that
  is based on Jikes RVM 2.2.0. It uses extended Fourier-Motzkin variable
  elimination (FMVE) to derive proofs. The framework presented is called VBCE
  (Verifiable Bound Check Elimination) and can be directly compared to VDF
  (Verifiable Data Flow Analysis). The approach is intraprocedural, so it is
  not able to deal with interprocedural relations; it works on high level code
  at compile time using annotations and is therefore not suitable for direct
  JIT optimization. Loop invariants are not taken into account. \par

  Furthermore, in the same year Gampe, von Ronne and Niedzielski et al.
  published two updates,
  \cite{SafeBoundsCheckAnnotations:2008}
  and
  \cite{SpeculativeImprovementsToVerfifiableBoundsCheckElimination:2008},
  of their previous work
  \cite{VerifiableRangeAnalysisAnnotationsForArrayBoundsCheckElimination:2007}
  that introduce speculation and newer mechanisms for annotations into the
  optimization technique. The papers improve the previous framework further.
  It still works on high level code at compile time but now takes loop
  invariants into account.

  To understand the relationship of their publications on the
  topic, these papers should be studied at least at a glance.

  In 2009 Würthinger, Wimmer and Mössenböck revealed a further update of their
  approach in
  \cite{ArrayBoundsCheckEliminationInTheContextOfDeoptimization:2009} that
  introduces further improvements and optimizations in the HotSpot Java VM.
The document is logically strongly related to
  \cite{ArrayBoundsCheckEliminationForTheJavaHotSpotTMClientCompiler:2007}
  and explains that code duplication (loop versioning) is avoided by using the
  interpretation mode of the VM to retain exception semantics for PRCs.
  It provides a good explanation of all parts, especially PRCs in loops and why
  loops with additional exit conditions can't be optimized. Furthermore,
  conditions are not managed in an equality/inequality graph but per
  instruction. Interprocedural optimizations are not taken into account due to
  the dynamic class loading mechanism of Java. Loading new classes leads to
  more information about array accessing and can lead to deoptimization (OSR at
  a safepoint back to the interpreter stack) if the previous optimizations
  can't be proven under the new conditions. But several ABCs can be grouped if
  provable, which leads to further optimizations. All things considered, this
  is a really suitable ABC optimization technique, clearly embedded in the VM
  and very effective going by the test results. \par

  \cite{ArrayBoundsCheckEliminationForJavaBasedOnSparseRepresentation:2009} by
  Yang, Huang and Yang is another publication from 2009 that introduces a new
  algorithm, ABCE, that is based on HSSA form and can be used in a Java static
  compiler. It builds an inequality graph and eliminates FRCs (fully redundant
  checks) and PRCs (partially redundant checks) on it in two phases. The work
  is based on the Openjc Java compiler, which is restricted to very few
  platforms and not widespread. This compiler uses a five phase generation
  style where in the fourth phase the optimization is done on code that was
  transformed to HSSA form and back. That means a lot of time overhead. The
  quality and embedding of the algorithm into the compiler is suitable but not
  usable for run time optimizations and therefore too restricted.
Furthermore, the article only uses the
  percentage of eliminated ABCs as a benchmark and a very intransparent
  performance score graphic! \par

  In the same year Gampe, von Ronne and Niedzielski et al. published another
  update,
  \cite{AVerifiableControlFlowAwareConstraintAnalyzerForBoundsCheckElimination:2009},
  of their previous work
  \cite{VerifiableRangeAnalysisAnnotationsForArrayBoundsCheckElimination:2007}
  that further improves their ABC optimization technique. In the document a
  new, more modern technique is presented that is also based on the eSSA form
  of the ABCD algorithm and claimed to be 10\% more efficient than on SafeTSA.
  Then CAS (constraint analysis system), a verifiable symbolic program
  constraint analyzer, is presented. Like its predecessors, the approach is
  also based on multiple phases and FMVE. \par

  From 2010 the last known publication about ABC optimization is
  \cite{SafeMultiphaseBondsCheckEliminationInJava:2010}, again by Gampe, von
  Ronne and Niedzielski et al. They refined the details of their
  multiphase ABC optimization technique based on SafeTSA with annotations, CAS
  and elaborate speculation. The results and also the additionally provided
  information about the topic show that the solution is practically satisfying.
  The most challenging problems
  connected to ABCs are solved and the solution is well embedded into a Java
  compiler and Java VM. These are not widespread but solve the problem of ABCs
  as far as possible to be both efficient and convenient.

  \subsection{Historical Conclusion}
  The latest research in ABC optimization proves the assumption that only an
  early and clean integration into the compilation and/or interpretation
  process, together with suitable intermediate representations and multiple
  optimization phases, leads to practically satisfying results that really pay
  off and tackle all newly introduced problems.
For Java, the lately improved approach of the HotSpot Java
  VM and the multiphase bound check elimination MBCE promise practical success,
  because all other previous techniques suffer from many drawbacks that can't
  really be solved due to the overall environment of the language architecture,
  hardware, future developments and other aspects of reality.

%% ABCs in the Cacao Java VM --------------------------------------------------

\section{ABCs in Cacao Java Virtual Machine}

  \subsection{Introduction to Cacao Java Virtual Machine}
  The work on the Cacao Java VM was started in 1996 by Reinhard Grafl and
  Andreas Krall. They wanted to create a free, efficient and stable Java VM for
  a wide variety of processor architectures and platforms. In the following
  years many people joined the development team and implemented several
  extensions and refinements, and more and more architectures were supported.
  The implementation is mostly done in C and some parts have already been
  refactored to C++. The VM is partitioned
  into modules (*.h/*.hpp with *.c/*.cpp) and keeps to the relevant Java
  standards concerning behavior and general structure. For more detailed
  information about the Cacao Java VM consult
  \cite{HandbookForJavaCacaoVirtualMachine:1996} and
  \cite{CacaoA64BitJavaVMJustInTimeCompiler:1997}.

  \subsection{Array Bounds Checking in Cacao Java Virtual Machine}
  In this chapter we will describe in general all necessary aspects concerning
  ABCs. The Cacao Java VM uses integer compare instructions of the processor to
  implement ABCs. They are generated, together with all other code parts,
  during JIT compilation by the code generators from a special internal
  intermediate representation whose form is strongly related to quadruples and
  derived from the Java bytecode provided by Java class files. ABCs are
  positioned in front of array accesses.
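A common single-compare realization of such a check, sketched here in Java for illustration only (this is not Cacao's actual generated code, and the class and method names are invented), folds both bound tests into one comparison by treating the index as an unsigned value:

```java
// Illustrative sketch of a single-compare bounds check: comparing the
// index against the length as *unsigned* values covers both i < 0 and
// i >= length at once, because a negative i maps to a huge unsigned
// value. A code generator can emit this as one processor compare
// instruction plus a conditional branch to the exception path.
public class SingleCompareCheck {
    static int load(int[] a, int i) {
        if (Integer.compareUnsigned(i, a.length) >= 0)   // one compare
            throw new ArrayIndexOutOfBoundsException(i);
        return a[i];                                     // safe access
    }
}
```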
The JIT compilation is done locally on every method invocation and therefore
  only allows intraprocedural optimizations, so interprocedural ABC
  optimization is impossible, because all underlying and parallel methods would
  have to be analyzed. Some
  experiments with example applications showed that execution with ABCs turned
  on leads to significant execution time overhead. For some applications this
  overhead is enormous, so in consideration of all properties of the Cacao Java
  VM, its environment and implementation, an ABC optimization would be able to
  drastically reduce execution time for some types of applications. In the past
  this motivated the creation of an ABC optimizer in the Cacao Java VM.

  \subsection{Array Bounds Check Optimization in Cacao Java Virtual Machine}
  In the year 1998 Christopher Krügel added an ABC optimizer for loops to the
  Cacao Java VM. This led to improvements concerning the execution time of
  array intensive applications. For the reasons previously explained, the
  optimization algorithm design was limited by many constraints, so the author
  concentrated only on the most important topics. A useful generic introduction
  to the optimizer can be obtained from
  \cite{HandbookForJavaCacaoVirtualMachine:1996} in chapter 5.7 Loop
  Optimization.

  \subsubsection{Power and Properties of the ABC Optimizer}
  The intermediate representation (IR) used within the Cacao Java VM is not
  really suitable for complicated high performance optimizations like, for
  example, SSA based IRs are. This has historical reasons, because these
  representation formats only became famous in the past decade and also have
  disadvantages, so their usage must be carefully considered during the whole
  design process of an abstract machine or compiler. For this reason the
  algorithm design was in the first instance limited by the complexity raised
  by analysis and transformations of this quadruple related IR.
The time consumption of the
  optimizer must also be taken into account.
  Secondly, the JIT compiler architecture limited the optimization scope to one
  local method. This further constrains the possibilities because no global
  ABCs can be removed. Due to the internal architecture of the Cacao Java VM
  the structures and algorithms grew very complicated, and so the developer
  only focused on the most important facts concerning ABC optimization. The
  author concentrated only on optimization within loops and furthermore
  restricted his algorithm to loops whose induction variables grow or shrink by
  constant factors. The algorithm designer presumably relied on the argument
  that many empirical tests in the past showed that these kinds of loops
  introduce most of the execution time overhead, because they are very often
  and commonly used by programmers in many programming languages. The results
  he presented proved the concept to be generally suitable. So he did not
  analyze detailed theoretical differences between redundant/partially
  redundant or necessary/local/global ABCs. He simply concentrated on the ABCs
  that seem to cause the most overhead in loops within normal applications.

  \subsubsection{Generic Design Issues of the ABC Optimizer}
  This chapter will provide a generic overview of the design of the ABC
  optimizer and the added improvements. A detailed explanation is, due to the
  amount of information, not possible within the document's scope. For a
  further functional explanation please consult
  \cite{HandbookForJavaCacaoVirtualMachine:1996}
  and for the implementation details directly the comments in the Cacao Java VM
  source code.
  \par
  The general idea of the algorithm is to analyze a method for loops during its
  JIT compilation process.
This is accomplished by a depth first search over the code blocks (basic
  blocks), then calculating the dominators of the blocks using the
  Lengauer-Tarjan algorithm
  \cite{AFastAlgorithmForFindingDominatorsInAFlowGraph:1979}
  and detecting all loops by searching for back edges
  \cite{ModernCompilerImplementationInC:1998}. These are very time consuming
  tasks because all nodes (basic blocks) of the method must be iterated several
  times on every method compilation.
  After this has happened, identical loops are merged, nested loops resolved,
  and all together they are topologically sorted.
  The loops are optimized in their sorted order by analyzing their loop headers
  and the variables/constants involved in array access instructions. If
  properties are found that allow optimization, the loop and its header are
  adapted to suit optimal execution behavior. If nested loops exist, their
  combinatorial behavior must be considered.
  The loop header must be adapted in such a way that the new dynamic and static
  constraints introduced by code motion out of the loop are inserted. In
  general, only loops can be optimized where array index variables are constant
  (full), only incremented (lower bound) or only decremented (upper bound) by
  constant values.
  Loop headers with or-statements can't be optimized by the algorithm, nor can
  cases where array index variables are modified within exception handling
  (catch) blocks.
  The original loops are replaced in the memory layout by the optimized ones,
  with a conditional jump to the originals if some dynamic constraints don't
  hold.
  Also important for the optimization design is to notice that the original
  version of the loop is held in memory parallel to the optimized one to allow
  correct execution with ABCs in case some dynamic constraints don't hold or
  can't be proven. This introduces a lot of time and space overhead during JIT
  compilation because of the copy procedure.
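The versioning scheme just described, an optimized loop copy guarded by dynamic constraint tests with the original checked copy kept as a fallback, can be sketched roughly as follows. This is illustrative Java with invented names, and the explicit checks in the fallback stand in for the ABCs the VM would normally emit per access:

```java
// Rough sketch of loop versioning: if the dynamic constraints tested
// up front hold, the check-free optimized copy runs; otherwise control
// falls back to the original copy, which performs a bounds check
// before every array access.
public class LoopVersioning {
    static int sum(int[] a, int lo, int hi) {
        int sum = 0;
        if (a != null && lo >= 0 && hi <= a.length) {
            for (int i = lo; i < hi; i++)
                sum += a[i];                 // optimized copy: no ABCs
        } else {
            for (int i = lo; i < hi; i++) {  // original copy with ABCs
                if (i < 0 || i >= a.length)
                    throw new ArrayIndexOutOfBoundsException(i);
                sum += a[i];
            }
        }
        return sum;
    }
}
```

Note that both copies exist side by side, mirroring the document's observation that every optimized loop is held in memory twice.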
The correct patching
  of links between code blocks and exact exception handling add even more
  complexity to the solution. So every optimized loop exists twice within
  memory.

  \subsubsection{Implementation of the ABC Optimizer}

  This chapter is dedicated to giving an abstract but fundamental introduction
  to the structure of the ABC optimization algorithm. At first
  the most important data structures are explained to give an idea of what kind
  of data is necessary for the processes (temporarily and in the end). Then we
  will have a close look at the algorithm in the form of abstract pseudocode
  that represents all important functions and the overall behavior. The
  functions are named to be self explanatory, but for understanding the ideas
  behind them a good start is the generic textual description found in
  \cite{HandbookForJavaCacaoVirtualMachine:1996} in chapter 5.7 Loop
  Optimization. This can give some useful introductory information before
  studying the real behavior.

\end{multicols}

  The most important data structures for understanding the ABC optimization
  algorithm are:

  \begin{description}

  \item[LoopData (ld)] is a data structure that contains all intermediate
  information and states necessary during the ABC optimization, like detected
  loops, exception graphs, auxiliary information to improve algorithm
  performance\ldots

  \item[JITData (jd)] contains all information about the just-in-time
  compiled method that is currently optimized. This structure contains for
  example the method's code blocks (basic blocks), the number of code blocks,
  the exception table\ldots

  \item[LoopContainer (lc)] holds all information about a specific
  loop, like the contained code blocks for example.

  \item[BasicBlocks (bb)] is a container that contains the code of one
  atomic code part of the method (no branch/jump instruction contained).
So this means that a basic block is guaranteed to
+ be executed in one exact sequence.
+
+ \end{description}
+
+ The most important data structures are now identified, and the algorithm
+ can be described in pseudocode.\\
+
+ Preconditions are that the method must be completely parsed and analyzed,
+ and that all information must be correctly stored within the related
+ jitdata structure.
+
+ %% Codesection -----------------------------------------------------------------
+
+ \begin{verbatim}
+
+ depthFirstSearchOnMethodBasicBlocks (jd)
+ exploreDominators (ld)
+ detectLoopsInMethod (jd, ld)
+ analyzeDoubleLoopHeaders (ld)
+ analyzeNestedLoops (jd, ld)
+ for all LoopContainers (lc) {
+   optimizeSingleLoop (jd, ld, lc)
+ }
+ redirectMethodEntryBlock (jd, ld)
+
+ optimizeSingleLoop (jd, ld, lc) {
+   analyzeLoopForArrayAccesses (jd, ld, lc)
+   if ( analyzeLoopForOrAndExceptions (jd, ld, lc) == true ) {
+     initializeLoopHeaderConstraints (jd, ld, lc)
+     removeHeaderBoundChecks (jd, ld, lc)
+     for all BasicBlocks (bb) {
+       removeBoundChecks (jd, ld, lc)
+     }
+     insertChecksIntoLoopHeader (jd, ld)
+   }
+ }
+
+ initializeLoopHeaderConstraints (jd, ld, lc) {
+   if ( last Instruction in header is comparison against constant ) {
+     obtainLeftArgumentDetails (ld, lc)
+   } else if ( last Instruction in header is standard comparison ) {
+     obtainLeftArgumentDetails (ld, lc)
+     obtainRightArgumentDetails (ld, lc)
+   } else {
+     // unknown constraints for these conditions
+   }
+   if ( both sides contain changing or both have no index variables ) {
+     // no constraints can be proven valid
+   } else {
+     force index variable to be on left side
+     ForceRightSideToBeConstant (ld, lc)
+     obtainDynamicConstraints (ld, lc)
+   }
+ }
+
+ removeBoundChecks (jd, ld, lc) {
+   mergeConstraintsWithNestedLoops (ld, lc)
+   for all Instructions of BasicBlock (bb) {
+     if ( Instruction is accessing array ) {
+       obtainIndexVariableDetails (ld, bb)
+       calculateBoundConstraints (jd, ld, bb)
+       createLoopHeaderCodeMotionChecks (jd, ld, bb)
}
+   }
+   for all successors of BasicBlock (bb) {
+     removeBoundChecks (jd, ld, bb)
+   }
+ }
+
+ insertChecksIntoLoopHeader (jd, ld) {
+   copyCompleteLoop (jd, ld)
+   createBasicBlockForCodeMotionChecks (jd, ld)
+   insertGotoInstructionsForHandlingNewLoopHeaderBasicBlock (jd, ld)
+   adaptExceptionHandlingRegions (jd, ld)
+   informParentLoopsAboutNewHeader (jd, ld)
+   for all LoopHeaderCodeMotionChecks {
+     createInstructionsForConstraintChecks (jd, ld)
+     insertInstructionsIntoNewLoopHeader (jd, ld)
+   }
+   updateJumpTargetsOfLoops (jd, ld)
+   updateExceptionHandlingForNewAndDuplicatedLoop (jd, ld)
+ }
+
+ \end{verbatim}
+
+ %% Codesection end -------------------------------------------------------------
+
+ Postconditions are that the method's jitdata structure holds correct, executable
+ and optimized loop code blocks that show no semantic differences to the original form.\newline
+
+ Keep in mind that this pseudocode representation is, due to the document's scope,
+ very generic. For more detailed information it is best to consult the source code
+ and its comments directly, because it is hard to go into further detail here while
+ preserving the overview.
+
+ \subsubsection{Results and Performance of the ABC Optimizer}
+ This chapter presents results taken from an application that sieves prime numbers.
+
+ \begin{table}[!ht]
+ \caption{Benchmark Results of ABCs in Cacao}
+ \label{BenchResABCsInCacao}
+ \begin{center}
+ \begin{tabular}{lccc}
+ Configuration&Light&Normal&Hard\\
+ \hline
+ Runs&10&5&2\\
+ Normal&5,563&55,195&551,355\\
+ without ABCs -cb&5,562&55,192&551,414\\
+ with -oloop&5,560&55,182&551,350\\
+ \hline
+ \end{tabular}
+ \par\medskip\footnotesize
+ The problem sizes rise by a factor of 10 per level, starting from 50000/10000.
+ \end{center}
+ \end{table}
+
+ Given the polynomial behavior of this data, a fixed percentage of speedup should
+ be observable, but it is easy to see that there are no differences between these
+ configurations.
This encourages the assumption that ABCs are always created within the Cacao Java VM,
+ whatever arguments are passed. This assumption can be verified using the assembler
+ code output of the compiled application.
+
+\begin{multicols}{2}
+
+ \subsubsection{Possible Improvements for the ABC Optimizer}
+ Due to the constraints of the virtual machine architecture, the algorithm is a very
+ conservative approach and uses no complicated mechanisms like speculation,
+ probability calculations or exhaustive stack exploration. The implementation is done
+ in C (structured programming technique), so it is not very likely that substantial
+ performance improvements can be found in the implementation itself, but as always
+ some possibilities remain that could lead to better results. A very expensive part
+ of the ABC optimization algorithm is the loop detection required at the beginning.
+ This is accomplished by the Lengauer-Tarjan algorithm presented in
+ \cite{AFastAlgorithmForFindingDominatorsInAFlowGraph:1979}.
+ This algorithm has good asymptotic complexity and is therefore the preferred
+ solution for very big graphs (\(>\) 30,000 nodes); it is also presented for example in
+ \cite{ModernCompilerImplementationInC:1998}.
+ But in practice methods and nested loops are in general much smaller and have only
+ tens or hundreds of nodes in their control flow graphs. Cooper, Harvey and Kennedy showed in
+ \cite{ASImpleFastDominanceAlgorithm:2001}
+ that another algorithm with higher asymptotic complexity than Lengauer-Tarjan performs much
+ better for smaller graphs, because the complexity advantage only becomes relevant for very big
+ (asymptotic) graphs that normally do not occur in most programs. The authors claim that their
+ algorithm is 2.5 times faster than the Lengauer-Tarjan algorithm for conventional method and
+ loop graphs.
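The iterative scheme of Cooper, Harvey and Kennedy is compact enough to sketch here. The following minimal C version assumes basic blocks numbered in reverse postorder with block 0 as the method entry; the array sizes and names are illustrative, not the actual Cacao data structures:

```c
#include <assert.h>

#define MAXN  8
#define UNDEF (-1)

/* predecessor lists of a small CFG, blocks numbered in reverse postorder */
static int npred[MAXN];
static int pred[MAXN][MAXN];
static int idom[MAXN];          /* immediate dominator of each block */

/* walk the two dominator-tree "fingers" upwards until they meet */
static int intersect(int b1, int b2)
{
    while (b1 != b2) {
        while (b1 > b2) b1 = idom[b1];
        while (b2 > b1) b2 = idom[b2];
    }
    return b1;
}

/* Cooper/Harvey/Kennedy: iterate to a fixed point over all blocks in
 * reverse postorder; simple to implement and fast for the small control
 * flow graphs that JIT-compiled methods typically have. */
static void compute_dominators(int nblocks)
{
    for (int i = 0; i < nblocks; i++)
        idom[i] = UNDEF;
    idom[0] = 0;                          /* entry dominates itself */
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int b = 1; b < nblocks; b++) {
            int new_idom = UNDEF;
            for (int j = 0; j < npred[b]; j++) {
                int p = pred[b][j];
                if (idom[p] == UNDEF)     /* predecessor not processed yet */
                    continue;
                new_idom = (new_idom == UNDEF) ? p : intersect(p, new_idom);
            }
            if (new_idom != UNDEF && idom[b] != new_idom) {
                idom[b] = new_idom;
                changed = 1;
            }
        }
    }
}
```

Once the immediate dominators are known, a back edge is simply an edge whose target dominates its source, which is exactly the loop detection step the ABC optimizer starts with.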
If the
+ experiments are valid, there is a possibility to improve the whole ABC optimization algorithm by
+ using the simpler algorithm of Cooper, Harvey and Kennedy. That algorithm, and especially its
+ correctness proof, is also much simpler to understand and therefore to implement. \par
+ One fact must be kept in mind: to really allow drastic improvements in algorithm performance,
+ the complete Cacao Java Virtual Machine architecture would have to be reconsidered, and this
+ demanding task can only be done in an iterative way over a long period of time.
+
+%% Conclusion -----------------------------------------------------------------
+
+\section{Conclusion}
+This document presented general information about the topic of ABCs and also covered
+historical aspects that illustrate their evolution. The main part dealt with ABC optimization in the
+Cacao Java VM. All things considered, successful array bounds check optimization is a complicated and
+challenging task with many facets and relations. A lot of details must be considered in the design and
+implementation phases. But it is definitely a problem that must be solved satisfactorily for all
+future developments in computer languages and informatics in general, because it is simply not
+acceptable to waste computing time on superfluous calculations. The ABC optimization within loops in
+the Cacao Java Virtual Machine is very conservative due to its environment and also for historical
+reasons. So we are definitely still far away from the perfect solution, but the algorithm proved to be
+useful in some cases (programs), and this is a first step towards a practical, really satisfying
+result.
+ +%% Bibliography --------------------------------------------------------------- + +\bibliographystyle{alpha} +\bibliography{bibfile} + +\end{multicols} + +\end{document} + +%% End ------------------------------------------------------------------------ diff --git a/doc/abc_optimization/bibfile.bib b/doc/abc_optimization/bibfile.bib new file mode 100644 index 000000000..15bbf481d --- /dev/null +++ b/doc/abc_optimization/bibfile.bib @@ -0,0 +1,854 @@ +# General books and articles that cover the topic over all and are necessary for understanding the whole problem. +############################################################################### + +@Manual{JavaVMSpecification1.0:1995, + title = "The {Java} Virtual Machine Specification", + month = aug, + added-by = "sti", + URL = "http://java.sun.com/doc/vmspec/VMSpec.ps", + keywords = "java, virtual machine, bytecode", + edition = "1.0 Beta", + annote = "a virtual machine similar to UCSD p-code or smalltalk. + stack-machine. dynamic loading. direct support for + object orientation (e.g. 
virtual method calls)", + organization = "Sun Microsystems", + year = "1995", +} + +@Book{JavaLanguageSpecificationv3.0:2005, + author = "James Gosling and Bill Joy and Guy Steele and Gilad + Bracha", + address = "Boston, Mass.", + title = "The {Java} Language Specification, Third Edition", + publisher = "Addison-Wesley", + series = "The Java Series", + year = "2005", +} + +@Article{CompilingJavaJustInTime:1997, + author = "Timothy Cramer and Richard Friedman and Terrence + Miller and David Seherger and Robert Wilson and Mario + Wolczko", + title = "Compiling {Java} Just in Time: Using runtime + compilation to improve {Java} program performance", + journal = "IEEE Micro", + volume = "17", + number = "3", + pages = "36--??", + month = may # "\slash " # jun, + year = "1997", + CODEN = "IEMIDZ", + ISSN = "0272-1732", + bibdate = "Tue Aug 12 12:35:06 MDT 1997", + URL = "http://pascal.computer.org/mi/books/mi1997/pdf/m3036.pdf", + acknowledgement = "Nelson H. F. Beebe, University of Utah, Department + of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake + City, UT 84112-0090, USA, Tel: +1 801 581 5254, FAX: +1 + 801 581 4148, e-mail: \path|beebe@math.utah.edu|, + \path|beebe@acm.org|, \path|beebe@computer.org| + (Internet), URL: + \path|http://www.math.utah.edu/~beebe/|", +} + +@InProceedings{FastEffectiveCodeGenerationInAJustInTimeJavaCompiler:1998, + title = "Fast, Effective Code Generation in a Just-In-Time Java + Compiler", + author = "Ali-Reza Adl-Tabatabai and Michal Cierniak and + Guei-Yuan Lueh and Vishesh M. Parikh and James M. + Stichnoth", + year = "1998", + bibdate = "2010-01-10", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/pldi/pldi98.html#Adl-TabatabaiCLPS98", + booktitle = "PLDI", + pages = "280--290", + URL = "http://doi.acm.org/10.1145/277650.277740", +} + +@InProceedings{TheDesignAndImplementationOfACertifyingCompiler:1998, + author = "G. Necula and P. 
Lee", + title = "The {D}esign and {I}mplementation of a {C}ertifying + {C}ompiler", + booktitle = "Proc. of PLDI'98", + year = "1998", + publisher = "ACM Press", +} + +@InProceedings{DesignImplementationAndEvaluationOfOptimizationsInAJavaJustInTimeCompiler:1999, + title = "Design, Implementation, and Evaluation of + Optimizations in a Just-in-Time Compiler", + author = "Kazuaki Ishizaki and Motohiro Kawahito and Toshiaki + Yasue and Mikio Takeuchi and Takeshi Ogasawara and + Toshio Suganuma and Tamiya Onodera and Hideaki Komatsu + and Toshio Nakatani", + year = "1999", + bibdate = "2002-12-17", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/java/java1999.html#IshizakiKYTOSOKN99", + booktitle = "Java Grande", + pages = "119--128", + URL = "http://doi.acm.org/10.1145/304065.304111", +} + +@Article{CacaoA64BitJavaVMJustInTimeCompiler:1997, + author = "Andreas Krall and Reinhard Grafl", + title = "{CACAO} -- {A} 64-bit Java{VM} Just-in-Time Compiler", + journal = "Concurrency: Practice and Experience", + volume = "9", + number = "11", + pages = "1017--1030", + month = nov, + year = "1997", + keywords = "computational science \& engineering Java: simulation + \& modeling,", +} + +@book{HandbookForJavaCacaoVirtualMachine:1996, + title= {Handbook for Java Cacao Virtual Machine}, + author= {Andreas Krall and Reinhard Grafl et al.}, + publisher= {http://www.cacaovm.org}, + year= {1996}, + pages= {116} +} + +@Article{MarmotAnOptimizingCompilerForJava:2000, + title = "Marmot: an optimizing compiler for Java", + author = "Robert P. Fitzgerald and Todd B. Knoblock and Erik Ruf + and Bjarne Steensgaard and David Tarditi", + journal = "Softw, Pract. 
Exper", + year = "2000", + number = "3", + volume = "30", + bibdate = "2003-11-25", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/journals/spe/spe30.html#FitzgeraldKRST00", + pages = "199--232", +} + +@InProceedings{VerifiableAnnotationsForEmbeddedJavaEnvironments:2005, + title = "Verifiable annotations for embedded java + environments", + author = "Guangyu Chen and Mahmut T. Kandemir", + bibdate = "2006-02-15", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/cases/cases2005.html#ChenK05", + booktitle = "CASES", + booktitle = "Proceedings of the 2005 International Conference on + Compilers, Architecture, and Synthesis for Embedded + Systems, {CASES} 2005, San Francisco, California, + {USA}, September 24-27, 2005", + publisher = "ACM", + year = "2005", + editor = "Thomas M. Conte and Paolo Faraboschi and William H. + Mangione-Smith and Walid A. Najjar", + ISBN = "1-59593-149-X", + pages = "105--114", + URL = "http://doi.acm.org/10.1145/1086297.1086312", +} + +@Book{ModernCompilerImplementationInC:1998, + author = "Andrew W. Appel", + title = "Modern Compiler Implementation in {C}", + publisher = "Cambridge University Press", + ISBN = "0-521-58390-X", + year = "1998", +} + +############################################################################### +# Now some references are presented that deal with intermediate representations especially with several SSA forms. +############################################################################### + +@Article{EfficientlyComputingSSAFormAndTheControlDependenceGraph:1991, + author = "Ron Cytron and Jeanne Ferrante and Barry K. Rosen and + Mark N. Wegman and F. 
Kenneth Zadeck", + title = "Efficiently computing static single assignment form + and the control dependence graph", + number = "4", + volume = "13", + month = oct, + added-by = "msteiner", + URL = "http://doi.acm.org/10.1145/115372.115320", + added-at = "Sat Jul 29 09:22:31 2006", + annote = "Classical paper on the SSA (Static Single Assignment) + form for intermediary code representation in + (optimizing) compilers. See + \url{http://www.cs.man.ac.uk/~jsinger/ssa.html} for + bibliography of SSA related work.", + pages = "451--490", + journal = "TOPLAS", + year = "1991", +} + +@Article{AVerifiableSSAProgramRepresentationForAgressiveCompilerOptimization:2006, + author = "Menon and Glew and Murphy and McCreight and Shpeisman + and Adl-Tabatabai and Petersen", + title = "A Verifiable {SSA} Program Representation for + Aggressive Compiler Optimization", + journal = "SPNOTICES: ACM SIGPLAN Notices", + volume = "41", + year = "2006", +} + +@Article{SafeTSAATypeSafeAndReferentiallySecureMobileCodeRepresentationBasedOnStaticSingleAssignmentForm:2001, + author = "Wolfram Amme and Niall Dalton and Jeffery von Ronne + and Michael Franz", + title = "{SafeTSA}: {A} Type Safe and Referentially Secure + Mobile-Code Representation Based on Static Single + Assignment Form", + journal = "ACM SIG{\-}PLAN Notices", + volume = "36", + number = "5", + pages = "137--147", + month = may, + year = "2001", + CODEN = "SINODQ", + ISSN = "0362-1340", + bibdate = "Sun Dec 14 09:18:26 MST 2003", + bibsource = "http://www.acm.org/sigplan/pldi/pldi2001/pldi_program.html; + http://portal.acm.org/", + acknowledgement = "Nelson H. F. 
Beebe, University of Utah, Department + of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake + City, UT 84112-0090, USA, Tel: +1 801 581 5254, FAX: +1 + 801 581 4148, e-mail: \path|beebe@math.utah.edu|, + \path|beebe@acm.org|, \path|beebe@computer.org| + (Internet), URL: + \path|http://www.math.utah.edu/~beebe/|", +} + +@Misc{UsingTheSafeTSARepresentationToBoostThePerformanceOfAnExistingJavaVirtualMachine:2002, + title = "Using the Safe{TSA} Representation to Boost the + Performance of an Existing Java Virtual Machine", + author = "Wolfram Amme and Jeffery Von Ronne and Michael Franz", + year = "2003", + month = dec # "~07", + abstract = "High-performance just-in-time compilers for Java need + to invest considerable effort before actual code + generation can commence. SafeTSA, a typed intermediate + representation based on SSA form, was designed to ease + this burden, decreasing the time required for dynamic + compilation, without sacrificing safety or code + quality.", + citeseer-references = "oai:CiteSeerPSU:2678; oai:CiteSeerPSU:717751; + oai:CiteSeerPSU:284691; oai:CiteSeerPSU:554814; + oai:CiteSeerPSU:114398; oai:CiteSeerPSU:328282; + oai:CiteSeerPSU:89224", + annote = "Wolfram Amme (Friedrich-Schiller-Universit at Jena; + Institut f ur Informatik; Ernst-Abbe-Platz 1-4; D-07743 + Jena , Germany); Jeffery Von Ronne (Information and + Computer Science; 127B Computer Science Trailer); + Michael Franz (Information and Computer Science; 444 + Computer Science Bldg);", + bibsource = "OAI-PMH server at cs1.ist.psu.edu", + language = "en", + oai = "oai:CiteSeerPSU:599324", + rights = "unrestricted", + URL = "http://citeseer.ist.psu.edu/599324.html; + http://www.ics.uci.edu/~jronne/pubs/cpc2003-safetsa5.pdf", +} + +############################################################################### +# The following documents now deal with the special topic of array bound check optimization. 
+# The order is chronologic from the real beginning to the most recent publications. +############################################################################### + +@Article{CompilerAnalysisOfTheValueRangesForVariables:1977, + author = "W. Harrison", + title = "Compiler analysis of the value ranges for variables", + journal = "IEEE Transactions on Software Engineering", + volume = "SE-3", + number = "3", + month = may, + year = "1977", + keywords = "data flow analysis,", +} + +@InProceedings{ImplementationOfAnArrayBoundChecker:1977, + title = "Implementation of an Array Bound Checker", + author = "Norihisa Suzuki and Kiyoshi Ishihata", + year = "1977", + bibdate = "2010-01-28", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/popl/popl77.html#SuzukiI77", + booktitle = "POPL", + cdrom = "POPL/00001437.PDF", + pages = "132--143", + URL = "http://doi.acm.org/10.1145/512950.512963", +} + +@Article{EconomicRangeChecksInPascal:1978, + author = "J. Welsh", + title = "Economic range checks in {Pascal}", + journal = "Software -- Practice \& Experience", + volume = "8", + pages = "85--97", + year = "1978", +} + +@Article{GlobalOptimizationBySuppressionOfPartialRedundancies:1979, + author = "E. Morel and C. 
Renvoise", + title = "Global optimization by suppression of partial + redundancies", + journal = "Communications of the ACM", + volume = "22", + number = "2", + pages = "96--103", + month = feb, + year = "1979", +} + +@InProceedings{OptimizationOfRangeChecking:1982, + author = "Victoria Markstein and John Cocke and Peter + Markstein", + title = "Optimization of Range Checking", + booktitle = "Proceedings of the {SIGPLAN} '82 Symposium on Compiler + Construction", + address = "Boston, Massachusetts", + month = jun # " 23--25,", + year = "1982", + pages = "114--119", +} + +@Article{OptimizationOfRangeChecking:1987, + author = "Victoria Markstein and Peter Markstein and John + Cocke", + title = "Optimization of range checking", + journal = "ACM SIG{\-}PLAN Notices", + volume = "39", + number = "4", + pages = "58--65", + month = apr, + year = "2004", + CODEN = "SINODQ", + ISSN = "0362-1340", + bibdate = "Tue Apr 12 09:38:13 MDT 2005", + bibsource = "http://portal.acm.org/", + acknowledgement = "Nelson H. F. Beebe, University of Utah, Department + of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake + City, UT 84112-0090, USA, Tel: +1 801 581 5254, FAX: +1 + 801 581 4148, e-mail: \path|beebe@math.utah.edu|, + \path|beebe@acm.org|, \path|beebe@computer.org| + (Internet), URL: + \path|http://www.math.utah.edu/~beebe/|", +} + +@Article{AFreshLookAtOptimizingArrayBoundChecking:1990, + author = "Rajiv Gupta", + title = "A fresh look at optimizing array bound checking", + journal = "ACM SIG{\-}PLAN Notices", + volume = "25", + number = "6", + pages = "272--282", + month = jun, + year = "1990", + CODEN = "SINODQ", + ISSN = "0362-1340", + bibdate = "Sun Dec 14 09:15:53 MST 2003", + bibsource = "http://portal.acm.org/", + acknowledgement = "Nelson H. F. 
Beebe, University of Utah, Department + of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake + City, UT 84112-0090, USA, Tel: +1 801 581 5254, FAX: +1 + 801 581 4148, e-mail: \path|beebe@math.utah.edu|, + \path|beebe@acm.org|, \path|beebe@computer.org| + (Internet), URL: + \path|http://www.math.utah.edu/~beebe/|", +} + +@Article{OptimizationOfArraySubscriptRangeChecks:1992, + title = "Optimization of Array Subscript Range Checks", + author = "Johnathan M. Asuru", + journal = "ACM Letters on Programming Languages and Systems", + pages = "109--118", + month = jun, + year = "1992", + volume = "1", + number = "2", +} + +@Article{OptimizingArrayBoundChecksUsingFlowAnalysis:1993, + title = "Optimizing Array Bound Checks Using Flow Analysis", + author = "Rajiv Gupta", + journal = "LOPLAS", + year = "1993", + number = "1-4", + volume = "2", + bibdate = "2002-12-02", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/journals/loplas/loplas2.html#Gupta93", + pages = "135--150", + URL = "http://doi.acm.org/10.1145/176454.176507", +} + +@InProceedings{EliminationOfRedundantArraySubscriptRangeChecks:1995, + title = "Elimination of Redundant Array Subscript Range + Checks", + author = "Priyadarshan Kolte and Michael Wolfe", + year = "1995", + bibdate = "2010-01-10", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/pldi/pldi95.html#KolteW95", + booktitle = "PLDI", + pages = "270--278", + URL = "http://doi.acm.org/10.1145/207110.207160", +} + +@InProceedings{EliminatingArrayBoundCheckingThroughDependentTypes:1998, + title = "Eliminating Array Bound Checking Through Dependent + Types", + author = "Hongwei Xi and Frank Pfenning", + year = "1998", + bibdate = "2010-01-10", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/pldi/pldi98.html#XiP98", + booktitle = "PLDI", + pages = "249--257", + URL = "http://doi.acm.org/10.1145/277650.277732", +} + +@Article{OptimizingArrayReferenceCheckingInJavaPrograms:1998, + title = "Optimizing Array Reference Checking in Java 
Programs", + author = "Samuel P. Midkiff and Jos{\'e} E. Moreira and Marc + Snir", + journal = "IBM Systems Journal", + year = "1998", + number = "3", + volume = "37", + bibdate = "2002-01-03", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/journals/ibmsj/ibmsj37.html#MidkiffMS98", + pages = "409--453", +} + +@InProceedings{TowardsArrayBoundCheckEliminationInJavaTMVirtualMachineLanguage:1999, + title = "Towards array bound check elimination in Java $^{TM}$ + virtual machine language", + author = "Hongwei Xi and Songtao Xia", + bibdate = "2006-02-15", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/cascon/cascon1999.html#XiX99", + booktitle = "CASCON", + booktitle = "Proceedings of the 1999 conference of the Centre for + Advanced Studies on Collaborative Research, November + 8-11, 1999, Mississauga, Ontario, Canada", + publisher = "IBM", + year = "1999", + editor = "Stephen A. MacKay and J. Howard Johnson", + pages = "14", + URL = "http://doi.acm.org/10.1145/781995.782009", +} + +@InProceedings{ABCDEliminatingArrayBoundsChecksOnDemand:2000, + author = "Rastislav Bodik and Rajiv Gupta and Vivek Sarkar", + title = "{ABCD}: eliminating array bounds checks on demand", + booktitle = "Proceedings of the ACM SIGPLAN 2000 Conference on + Programming Language Design and Implementation", + year = "2000", + pages = "321--333", + abstract = "similar to SSI, has phi functions, and also pi + functions to rename at conditional branches and after + array bounds checks", + URL = "http://doi.acm.org/10.1145/349299.349342", +} + +@book{ProcessorWithAcceleratedArrayAcessBoundsChecking:2000, + title= {USPatent: Processor with Accelerated Array Acess Bounds Checking}, + author= {Marc Tremblay and James M. O'Connor and William N. 
Joy}, + publisher= {Sun Microsystems, Inc.}, + year= {2000}, + pages= {31} +} + +@InProceedings{AFrameworkForOptimizingJavaUsingAttributes:2000, + title = "A framework for optimizing Java using attributes", + author = "Patrice Pominville and Feng Qian and Raja + Vall{\'e}e-Rai and Laurie J. Hendren and Clark + Verbrugge", + bibdate = "2006-02-15", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/cascon/cascon2000.html#PominvilleQVHV00", + booktitle = "CASCON", + booktitle = "Proceedings of the 2000 conference of the Centre for + Advanced Studies on Collaborative Research, November + 13-16, 2000, Mississauga, Ontario, Canada", + publisher = "IBM", + year = "2000", + editor = "Stephen A. MacKay and J. Howard Johnson", + pages = "8", + URL = "http://doi.acm.org/10.1145/782034.782042", +} + +@book{ApparatusAndMethodForArrayBoundsCheckingWithAShadowFile:2001, + title= {USPatent: Apparatus and Method for Array Bounds Checking with a Shadow File}, + author= {Gautam Dewan}, + publisher= {Sun Microsystems, Inc.}, + address= {}, + year= {2001}, + volume= {6}, + pages= {} +} + +@InProceedings{DerivingPreconditionsForArrayBoundCheckElimination:2001, + title = "Deriving Pre-Conditions for Array Bound Check + Elimination", + author = "Wei-Ngan Chin and Siau-Cheng Khoo and Dana N. Xu", + year = "2000", + bibdate = "2004-06-01", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/aplas/aplas2000.html#ChinKX00", + booktitle = "APLAS", + pages = "9--21", +} + +@InProceedings{AComparisonOfArrayBoundsCheckingOnSuperscalarAndVLIWArchitectures:2002, + author = {Chris Bentley and Scott A. Watterson and David K. 
Lowenthal}, + title = {A Comparison of Array Bounds Checking on Superscalar and VLIW Architectures}, + booktitle = {IEEE Workshop on Workload Characterization}, + year = {2002} +} + +@Article{AComprehensiveApproachToArrayBoundsCheckEliminationForJava:2002, + author = "Feng Qian and Laurie Hendren and Clark Verbrugge", + title = "A Comprehensive Approach to Array Bounds Check + Elimination for {Java}", + journal = "Lecture Notes in Computer Science", + volume = "2304", + pages = "325--??", + year = "2002", + CODEN = "LNCSD9", + ISSN = "0302-9743", + bibdate = "Tue Sep 10 19:09:22 MDT 2002", + bibsource = "http://link.springer-ny.com/link/service/series/0558/tocs/t2304.htm", + URL = "http://link.springer-ny.com/link/service/series/0558/bibs/2304/23040325.htm; + http://link.springer-ny.com/link/service/series/0558/papers/2304/23040325.pdf", + acknowledgement = "Nelson H. F. Beebe, Center for Scientific + Computing, University of Utah, Department of + Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake + City, UT 84112-0090, USA, Tel: +1 801 581 5254, FAX: +1 + 801 581 4148, e-mail: \path|beebe@math.utah.edu|, + \path|beebe@acm.org|, \path|beebe@computer.org|, + \path|beebe@ieee.org| (Internet), URL: + \path|http://www.math.utah.edu/~beebe/|", +} + +@InProceedings{EliminationOfJavaArrayBoundsChecksInThePresenceOfIndirection:2002, + author = "Mikel Luj{\'a}n and John R. Gurd and T. L. 
Freeman and + Jos{\'e} Migue", + title = "Elimination of Java array bounds checks in the + presence of indirection", + pages = "76--85", + booktitle = "Proceedings of the 2002 joint {ACM}-{ISCOPE} + conference on Java Grande ({JGI}-02)", + month = nov # " ~3--5", + publisher = "ACM Press", + address = "New York", + year = "2002", +} + +@book{MethodForOptimizingArrayBoundsChecksInPrograms:2002, + title= {USPatent: Method for Optimizing Array Bounds Checks in Programs}, + author= {Manish Gupta and Samuel Pratt Midkiff and Jose Eduardo Moreira}, + publisher= {IBM Corporation}, + year= {2002}, + pages= {58} +} + +@Article{ArrayBoundsCheckEliminationUtilizingAPageProtectionMechanism:2003, + title= {Array Bounds Check Elimination Utilizing a Page Protection Mechanism}, + author= {Motohiro Kawahito}, + publisher= {IBM Research, Tokyo Research Laboratory}, + journal= {RT0550 Computer Science}, + address= {1623-14, Shimotsuruma, Yamato, Kanagawa, 242-8502, Japan}, + year= {2003}, + pages= {6} +} + +@Article{EfficientAndEffectiveArrayBoundChecking:2005, + author = "Thi Viet Nga Nguyen and Fran{\c{c}}ois Irigoin", + title = "Efficient and effective array bound checking", + journal = "ACM Transactions on Programming Languages and + Systems", + volume = "27", + number = "3", + pages = "527--570", + month = may, + year = "2005", + CODEN = "ATPSDT", + ISSN = "0164-0925", + bibdate = "Thu Jul 7 12:37:29 MDT 2005", + bibsource = "http://www.acm.org/pubs/contents/journals/toplas/", + acknowledgement = "Nelson H. F. 
Beebe, University of Utah, Department + of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake + City, UT 84112-0090, USA, Tel: +1 801 581 5254, FAX: +1 + 801 581 4148, e-mail: \path|beebe@math.utah.edu|, + \path|beebe@acm.org|, \path|beebe@computer.org| + (Internet), URL: + \path|http://www.math.utah.edu/~beebe/|", + fjournal = "ACM Transactions on Programming Languages and + Systems", +} + +@Article{SymbolicBoundsAnalysisOfPointersArrayIndicesAndAccessedMemoryRegions:2005, + author = "Rugina and Rinard", + title = "Symbolic Bounds Analysis of Pointers, Array Indices, + and Accessed Memory Regions", + journal = "ACMTOPLAS: ACM Transactions on Programming Languages + and Systems", + volume = "27", + year = "2005", +} + +@InProceedings{ArrayBoundsCheckEliminationForTheJavaHotSpotTMClientCompiler:2007, + title = "Array bounds check elimination for the Java + HotSpot{\texttrademark} client compiler", + author = "Thomas W{\"u}rthinger and Christian Wimmer and + Hanspeter M{\"o}ssenb{\"o}ck", + bibdate = "2007-10-22", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/pppj/pppj2007.html#WurthingerWM07", + booktitle = "PPPJ", + booktitle = "Proceedings of the 5th International Symposium on + Principles and Practice of Programming in Java, {PPPJ} + 2007, Lisboa, Portugal, September 5-7, 2007", + publisher = "ACM", + year = "2007", + volume = "272", + editor = "Vasco Amaral and Luis Marcelino and Lu{\'i}s Veiga and + H. 
Conrad Cunningham", + ISBN = "978-1-59593-672-1", + pages = "125--133", + series = "ACM International Conference Proceeding Series", + URL = "http://doi.acm.org/10.1145/1294325.1294343", +} + +@Misc{VerifiableRangeAnalysisAnnotationsForArrayBoundsCheckElimination:2007, + author = {Jeffery Von Ronne and Kleanthis Psarris and David Niedzielski}, + title = {Verifiable Range Analysis Annotations for Array Bounds Check Elimination}, + year = {} +} + +@InProceedings{APracticalAndPreciseInferenceAndSpecializerForArrayBoundChecksElimination:2008, + title = "A practical and precise inference and specializer for + array bound checks elimination", + author = "Corneliu Popeea and Dana N. Xu and Wei-Ngan Chin", + bibdate = "2008-04-04", + bibsource = "DBLP, + http://dblp.uni-trier.de/db/conf/pepm/pepm2008.html#PopeeaXC08", + booktitle = "PEPM", + booktitle = "Proceedings of the 2008 {ACM} {SIGPLAN} Symposium on + Partial Evaluation and Semantics-based Program + Manipulation, {PEPM} 2008, San Francisco, California, + {USA}, January 7-8, 2008", + publisher = "ACM", + year = "2008", + editor = "Robert Gl{\"u}ck and Oege de Moor", + ISBN = "978-1-59593-977-7", + pages = "177--187", + URL = "http://doi.acm.org/10.1145/1328408.1328434", +} + +@Misc{EfficientIntraproceduralArrayBoundChecking:2008, + title = "Efficient Intraprocedural Array Bound Checking", + author = "Thi Viet and Nga Nguyenfranc and Ois Irigoin and Ronan + Keryelllit and Enst Bretagne", + year = "2008", + month = feb # "~07", + abstract = "Array bound checking is critical for code safety and + debugging but users are not ready to trade much + execution time for it. A considerable research work has + been carried out during the past 25 years but + experimental results are scarce. Commercial + implementations are limited to intraprocedural array + bound checking and are not really fulfilling user + expectations fot compilation and execution times. 
+                 Instead of designing a new specific algorithm, we
+                 implemented two algorithms representative of the main
+                 published approaches by re-using automatic
+                 parallelization techniques available in PIPS, an
+                 interprocedural parallelizer. The first algorithm is
+                 based on redundant bound checking elimination. The
+                 second one is based on insertion of unavoidable tests.
+                 Results with the SPEC CFP95 benchmarks show that
+                 commercial products could easily be improved using
+                 automatic parallelization techniques and that user
+                 expectations can be fulfilled for most benchmarks.
+                 However, additional techniques would have to be used to
+                 obtain excellent results for all benchmarks. Our
+                 approach to optimize bound checking can also be applied
+                 to other imperative languages such as Fortran, Pascal,
+                 Java when used for scientific applications.",
+  bibsource = "OAI-PMH server at citeseerx.ist.psu.edu",
+  contributor = "CiteSeerX",
+  language =  "en",
+  oai =       "oai:CiteSeerXPSU:10.1.1.80.9869",
+  relation =  "10.1.1.19.5510; 10.1.1.36.3970; 10.1.1.33.6934;
+                 10.1.1.127.8252; 10.1.1.121.8818; 10.1.1.44.8796;
+                 10.1.1.53.9852; 10.1.1.141.4211; 10.1.1.85.5471;
+                 10.1.1.34.2234; 10.1.1.41.326; 10.1.1.18.3004;
+                 10.1.1.30.1142; 10.1.1.34.7836",
+  rights =    "Metadata may be used without restrictions as long as
+                 the oai identifier remains attached to it.",
+  subject =   "array bound checking; range checking; subscript out of
+                 range; bound violation; program verification",
+  URL =       "http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.80.9869;
+                 http://www.cri.ensmp.fr/classement/doc/A-316.ps",
+}
+
+@Article{SafeBoundsCheckAnnotations:2008,
+  title =     "Safe bounds check annotations",
+  author =    "Jeffery von Ronne and Andreas Gampe and David
+                 Niedzielski and Kleanthis Psarris",
+  journal =   "Concurrency and Computation: Practice and Experience",
+  year =      "2009",
+  number =    "1",
+  volume =    "21",
+  bibdate =   "2009-05-07",
+  bibsource = "DBLP,
+
http://dblp.uni-trier.de/db/journals/concurrency/concurrency21.html#RonneGNP09",
+  pages =     "41--57",
+  URL =       "http://dx.doi.org/10.1002/cpe.1341",
+}
+
+@InProceedings{SpeculativeImprovementsToVerfifiableBoundsCheckElimination:2008,
+  title =     "Speculative improvements to verifiable bounds check
+                 elimination",
+  author =    "Andreas Gampe and Jeffery von Ronne and David
+                 Niedzielski and Kleanthis Psarris",
+  bibdate =   "2008-09-23",
+  bibsource = "DBLP,
+                 http://dblp.uni-trier.de/db/conf/pppj/pppj2008.html#GampeRNP08",
+  booktitle = "Proceedings of the 6th International Symposium on
+                 Principles and Practice of Programming in Java, {PPPJ}
+                 2008, Modena, Italy, September 9-11, 2008",
+  publisher = "ACM",
+  year =      "2008",
+  volume =    "347",
+  editor =    "Lu{\'i}s Veiga and Vasco Amaral and R. Nigel Horspool
+                 and Giacomo Cabri",
+  ISBN =      "978-1-60558-223-8",
+  pages =     "85--94",
+  series =    "ACM International Conference Proceeding Series",
+  URL =       "http://doi.acm.org/10.1145/1411732.1411745",
+}
+
+@InProceedings{ArrayBoundsCheckEliminationForJavaBasedOnSparseRepresentation:2009,
+  title =     "Array Bounds Check Elimination for Java Based on
+                 Sparse Representation",
+  author =    "Keqiao Yang and Zeng Huang and Min Yang",
+  bibdate =   "2010-04-23",
+  bibsource = "DBLP,
+                 http://dblp.uni-trier.de/db/conf/sera/sera2009.html#YangHY09",
+  booktitle = "Proceedings of the Seventh {ACIS} International
+                 Conference on Software Engineering Research, Management
+                 and Applications, {SERA} 2009, Haikou, China, 2-4
+                 December 2009",
+  publisher = "IEEE Computer Society",
+  year =      "2009",
+  editor =    "Roger Y.
Lee and Wencai Du and Haeng-Kon Kim and
+                 Shaochun Xu",
+  ISBN =      "978-0-7695-3903-4",
+  pages =     "189--196",
+  URL =       "http://doi.ieeecomputersociety.org/10.1109/SERA.2009.11",
+}
+
+@Article{ArrayBoundsCheckEliminationInTheContextOfDeoptimization:2009,
+  title =     "Array bounds check elimination in the context of
+                 deoptimization",
+  author =    "Thomas W{\"u}rthinger and Christian Wimmer and
+                 Hanspeter M{\"o}ssenb{\"o}ck",
+  journal =   "Sci. Comput. Program.",
+  year =      "2009",
+  number =    "5-6",
+  volume =    "74",
+  bibdate =   "2009-06-15",
+  bibsource = "DBLP,
+                 http://dblp.uni-trier.de/db/journals/scp/scp74.html#WurthingerWM09",
+  pages =     "279--295",
+  URL =       "http://dx.doi.org/10.1016/j.scico.2009.01.002",
+}
+
+@InProceedings{AVerifiableControlFlowAwareConstraintAnalyzerForBoundsCheckElimination:2009,
+  title =     "A Verifiable, Control Flow Aware Constraint Analyzer
+                 for Bounds Check Elimination",
+  author =    "David Niedzielski and Jeffery von Ronne and Andreas
+                 Gampe and Kleanthis Psarris",
+  bibdate =   "2009-08-20",
+  bibsource = "DBLP,
+                 http://dblp.uni-trier.de/db/conf/sas/sas2009.html#NiedzielskiRGP09",
+  booktitle = "Static Analysis, 16th International Symposium, {SAS}
+                 2009, Los Angeles, {CA}, {USA}, August 9-11, 2009.
+                 Proceedings",
+  publisher = "Springer",
+  year =      "2009",
+  volume =    "5673",
+  editor =    "Jens Palsberg and Zhendong Su",
+  ISBN =      "978-3-642-03236-3",
+  pages =     "137--153",
+  series =    "Lecture Notes in Computer Science",
+  URL =       "http://dx.doi.org/10.1007/978-3-642-03237-0",
+}
+
+@Misc{SafeMultiphaseBondsCheckEliminationInJava:2010,
+  author =    {Andreas Gampe and David Niedzielski and Jeffery Von Ronne and Kleanthis Psarris},
+  title =     {Safe, Multiphase Bounds Check Elimination in Java},
+  year =      {2010}
+}
+
+###############################################################################
+# Documents that cover important auxiliary information!
+###############################################################################
+
+@Article{AFastAlgorithmForFindingDominatorsInAFlowGraph:1979,
+  author =    "Thomas Lengauer and Robert E. Tarjan",
+  title =     "A Fast Algorithm for Finding Dominators in a Flow
+                 Graph",
+  journal =   "ACM Transactions on Programming Languages and
+                 Systems",
+  volume =    "1",
+  number =    "1",
+  pages =     "121--141",
+  month =     jul,
+  year =      "1979",
+  reffrom =   "Touzeau:acm:cc:1984",
+}
+
+@Misc{ASImpleFastDominanceAlgorithm:2001,
+  title =     "A Simple, Fast Dominance Algorithm",
+  author =    "Keith D. Cooper and Timothy J. Harvey and Ken
+                 Kennedy",
+  year =      "2001",
+  month =     aug # "~12",
+  abstract =  "This paper returns to a simple formulation of
+                 dominance as a global data-flow problem. Some insights
+                 into the nature of dominance lead to an implementation
+                 of an O(N^2) algorithm that runs faster, in practice,
+                 than the classic Lengauer-Tarjan algorithm, which has a
+                 time bound of O(E log(N)). We compare the algorithm to
+                 Lengauer-Tarjan because it is the best known and most
+                 widely used of the fast algorithms for dominance.
+                 Working from the same implementation insights, we also
+                 rederive (from earlier work on control dependence by
+                 Ferrante, et al.) a method for computing dominance
+                 frontiers.",
+  citeseer-references = "oai:CiteSeerPSU:472597; oai:CiteSeerPSU:87236;
+                 oai:CiteSeerPSU:106151;
+                 oai:CiteSeerPSU:54768",
+  annote =    "Keith D.
Cooper (Rice University; Houston , TX); Ken + Kennedy (SUMMARY);", + bibsource = "OAI-PMH server at cs1.ist.psu.edu", + language = "en", + oai = "oai:CiteSeerPSU:573270", + rights = "unrestricted", + URL = "http://citeseer.ist.psu.edu/573270.html; + http://www.cs.rice.edu/~keith/EMBED/dom.pdf", +} + + + +############################################################################### +# End +############################################################################### diff --git a/doc/abc_optimization/gwa.sty b/doc/abc_optimization/gwa.sty new file mode 100644 index 000000000..e5d892c15 --- /dev/null +++ b/doc/abc_optimization/gwa.sty @@ -0,0 +1,156 @@ +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +% ------------------------------------------------------------------------ +% Grundlagen Wissenschaftlichen Arbeitens - LaTeX Style File +% ------------------------------------------------------------------------ +% File name: gwa.sty +% Author: Wilfried Elmenreich +% Modifications: Raimund Kirner (13.03.2006) +% Revision: 1.1 +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +% +%----------------------------------------------------------------- ID --- +% + +\NeedsTeXFormat{LaTeX2e} \ProvidesPackage{gwa} + +% +%------------------------------------------------------------ Packages -- +% + +\RequirePackage{times} +\RequirePackage{a4} + +% +%--------------------------------------------------------- Page format -- +% + +\setlength{\topmargin}{-1.2cm} +\setlength{\textwidth}{16.5cm} +\setlength{\textheight}{23cm} +\setlength{\oddsidemargin}{0.3cm} +\setlength{\evensidemargin}{-0.6cm} +\setlength{\parindent}{3.5mm} +\setlength{\columnsep}{6mm} +\setlength{\hoffset}{0cm} + +\def\kompakt{\setlength{\baselineskip}{.93\baselineskip}\setlength{\columnsep}{5.5mm}} + +\renewcommand{\floatpagefraction}{0.9} +\renewcommand{\textfraction}{0.05} +\renewcommand{\topfraction}{1.0} +\renewcommand{\bottomfraction}{1.0} + 
+%\DeclareOption{latex}{ +%\setlength{\oddsidemargin}{1.1cm} +%\setlength{\evensidemargin}{0.1cm}} +% +%\DeclareOption{pdflatex}{ +%\setlength{\oddsidemargin}{0.5cm} +%\setlength{\evensidemargin}{-0.5cm}} + +%\ProcessOptions + +% +%----------------------------------------- Caption and list formatting -- +% + +\def\section{\@startsection{section}{2}{0cm}{-1.5ex plus -.1ex minus + -.2ex}{1.5ex}{\large\sc}} +\def\subsection{\@startsection{subsection}{2}{0cm}{-1.5ex plus -.1ex minus + -.2ex}{1.5ex}{\normalsize\sc}} + \def\subsubsection{\@startsection{subsubsection}{2}{0cm}{-1.5ex plus -.1ex minus + -.2ex}{1.5ex}{\normalsize\sc}} +%\renewcommand \thesection {\@Roman\c@section} +%\renewcommand\thesubsection {\@Alph\c@subsection} + +\renewcommand\labelitemi{\normalfont\bfseries --} +\renewcommand\labelitemii{$\m@th\bullet$} + +% +%----------------------------------------- Title and author formatting -- +% + +\def\affiliations#1{\gdef\@affiliations{#1}} +\def\inst#1{\unskip$^{#1}$} + +\def\authN#1{{\large#1}} +\def\matNr#1{{\normalsize\/Matr.Nr. 
#1}}
+\def\kennZ#1{{\normalsize\/Kennzahl #1}}
+\def\authI#1{{\normalsize#1}}
+\def\authU#1{{\normalsize#1}}
+\def\email#1{{\normalsize#1}}
+
+\makeatletter
+
+\def\@maketitle{%
+  \newpage
+  \null
+  \vskip 4em%
+  \begin{center}%
+    \let \footnote \thanks
+    {\vskip -6em \LARGE \@title \par}%
+    \vskip 1.5em%
+    {\large
+      \lineskip .5em%
+      \begin{tabular}[t]{c}%
+        \@author
+      \end{tabular}\par}%
+    \vskip 1em%
+    {\normalsize \@date}%
+  \end{center}%
+  \par
+  \vskip 1em
+}
+
+%\bibliographystyle{unsrt}
+\pagestyle{plain}
+
+%
+%---------------------------------------------------- Abstract command --
+%
+
+\newenvironment{absenv}{%
+  \list{}{\advance\topsep by0.35cm
+    \relax\small
+    \leftmargin=0cm
+    \labelwidth=\z@
+    \listparindent=0cm
+    \itemindent=0cm
+    \rightmargin\leftmargin
+  }\item[\hskip\labelsep
+    \bfseries Abstract] --- \em \setlength{\parskip}{0pt}}
+
+\renewcommand{\abstract}[1]{
+  \begin{absenv}
+    #1
+  \end{absenv}
+  \vskip 1.5em
+  \thispagestyle{plain}}
+
+%
+%--------------------------------------------- Bibliography formatting --
+%
+
+\renewenvironment{thebibliography}[1]
+     {\section*{\refname}%
+      \@mkboth{\MakeUppercase\refname}{\MakeUppercase\refname}%
+      \list{\@biblabel{\@arabic\c@enumiv}}%
+           {\settowidth\labelwidth{\@biblabel{#1}}%
+            \setlength{\parsep}{0em}
+            \setlength{\itemsep}{1pt}
+            \leftmargin\labelwidth
+            \advance\leftmargin\labelsep
+            \@openbib@code
+            \usecounter{enumiv}%
+            \let\p@enumiv\@empty
+            \renewcommand\theenumiv{\@arabic\c@enumiv}}%
+      \sloppy
+      \clubpenalty4000
+      \@clubpenalty \clubpenalty
+      \widowpenalty4000%
+      \sfcode`\.\@m}
+
+% = eof ==================================================================
diff --git a/doc/abc_optimization/ngerman.sty b/doc/abc_optimization/ngerman.sty
new file mode 100644
index 000000000..1fcdcb8da
--- /dev/null
+++ b/doc/abc_optimization/ngerman.sty
@@ -0,0 +1,680 @@
+%%
+%% This is file `ngerman.sty',
+%% generated with the docstrip utility.
+%% +%% The original source files were: +%% +%% german.dtx (with options: `new') +%% +%% This file is part of the `german' collection, +%% providing German language support for +%% plain TeX or LaTeX version 2e/2.09. +%% +%% ----------- Copyright (C) 1998, 1999 by B.Raichle ---------- +%% ------------------- All rights reserved. ------------------- +%% Maintained by Bernd Raichle (Uni Stuttgart), +%% using ideas by H.Partl (TU Wien, Uni.f.Bodenkultur Wien) +%% and many other people. +%% +%% +%% IMPORTANT NOTICE: +%% +%% This program can be redistributed and/or modified under the terms +%% of the LaTeX Project Public License Distributed from CTAN +%% archives in directory macros/latex/base/lppl.txt; either +%% version 1 of the License, or any later version. +%% +%% +%% Error Reports (in case of UNCHANGED versions) should be sent to: +%% +%% Bernd Raichle +%% +\expandafter\ifx\csname ngrm\string @VersionNo\endcsname\relax +\else + \ifnum\number\csname ngrm\string @VersionNo\endcsname<9806\relax + \else \ngermanTeX \expandafter\expandafter\expandafter\endinput + \fi\fi +\expandafter\mathchardef\csname ngrm\string @VersionNo\endcsname +=9806\relax % = v2.5e +\message{v2.5e 1998-07-08} +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname ProvidesPackage\endcsname\relax\else + \ProvidesPackage{ngerman}[1998/07/08 v2.5e % + Support for writing german texts (br)] +\fi +\chardef\atcode=\catcode`\@ +\catcode`\@=11 % \makeatletter +\expandafter\ifx\csname @ifundefined\endcsname\relax + \def\@ifundefined#1{% + \expandafter\ifx\csname #1\endcsname\relax + \expandafter\grmn@dqfirst\else\expandafter\grmn@dqsecond\fi} +\fi +\def\grmn@dqfirst#1#2{#1} +\def\grmn@dqsecond#1#2{#2} +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname DeclareTextSymbol\endcsname\relax +\@ifundefined{SS}{\def\SS{SS}}{} +\else +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname 
ProvideTextCommandDefault\endcsname\relax + \immediate\write17{} + \immediate\write17{% +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!} + \immediate\write17{% +!! Dies ist eine zu alte LaTeX2e-Version, die nicht} + \immediate\write17{% +!! alle fuer german/ngerman notwendigen Deklarationen} + \immediate\write17{% +!! zur Verfuegung stellt. Dennoch koennen Sie diese} + \immediate\write17{% +!! Pakete, eventuell mit kleinen Fehlern, verwenden.} + \immediate\write17{!!} + \immediate\write17{% +!! Bitte installieren Sie eine neuere LaTeX2e-Version,} + \immediate\write17{% +!! da zukuenftige Versionen der Pakete diese} + \immediate\write17{% +!! LaTeX2e-Version nicht mehr unterstuetzen werden!} + \immediate\write17{% +!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!} + \immediate\write17{} +\fi +\@ifundefined{UseTextSymbol}{% + \def\UseTextSymbol#1#2{{\fontencoding{#1}\selectfont #2}} +}{} +\@ifundefined{UseTextAccent}{% + \def\UseTextAccent#1#2#3{% + {\let\@curr@enc\f@encoding + \fontencoding{#1}\selectfont + #2{\fontencoding\@curr@enc\selectfont #3}}}% +}{} +\@ifundefined{ProvideTextCommand}{% + \def\ProvideTextCommand#1#2{% + %%% misses \RobustTextCommand{#1}{...etc.etc...}!! 
+ \expandafter\providecommand\csname #2\string#1\endcsname}% +}{} +\@ifundefined{ProvideTextCommandDefault}{% + \def\ProvideTextCommandDefault#1{% + \ProvideTextCommand{#1}{U}}% +}{} +\@ifundefined{DeclareTextCompositeCommand}{% + \def\DeclareTextCompositeCommand#1#2#3#4{% + % define a `dummy' text composite + \DeclareTextComposite{#1}{#2}{#3}{`\Z}% + % then redefine this command including the new command + \expandafter\def\csname\expandafter\string\csname + #2\endcsname\string#1-\string#3\endcsname##1##2{#4}}% +}{} +\@ifundefined{OT1\string\ss}{% + \wlog{ngerman: Re-declaration of \string\ss\space for OT1!}% + \DeclareTextSymbol{\ss}{OT1}{25}}{} +\@ifundefined{OT1\string\i}{% + \wlog{ngerman: Re-declaration of \string\i\space for OT1!}% + \DeclareTextSymbol{\i}{OT1}{16}}{} +\ProvideTextCommandDefault{\"}{\UseTextAccent{OT1}{\"}} +\ProvideTextCommandDefault{\ss}{\UseTextSymbol{OT1}\ss} +\ProvideTextCommandDefault{\i}{\UseTextSymbol{OT1}\i} +\ProvideTextCommandDefault{\SS}{SS} +\@ifundefined{textquotedblleft}{% + \ProvideTextCommandDefault{\textquotedblleft}{% + \UseTextSymbol{OT1}\textquotedblleft}% + \DeclareTextSymbol{\textquotedblleft}{OT1}{92}% + \DeclareTextSymbol{\textquotedblleft}{T1}{16}% +}{} +\@ifundefined{textquotedblright}{% + \ProvideTextCommandDefault{\textquotedblright}{% + \UseTextSymbol{OT1}\textquotedblright}% + \DeclareTextSymbol{\textquotedblright}{OT1}{`\"}% + \DeclareTextSymbol{\textquotedblright}{T1}{17}% + %% \DeclareTextSymbol{\textquotedbl}{T1}{`\"}% +}{} +\@ifundefined{textquoteleft}{% + \ProvideTextCommandDefault{\textquoteleft}{% + \UseTextSymbol{OT1}\textquoteleft}% + \DeclareTextSymbol{\textquoteleft}{OT1}{`\`}% + \DeclareTextSymbol{\textquoteleft}{T1}{`\`}% +}{} +\@ifundefined{textquoteright}{% + \ProvideTextCommandDefault{\textquoteright}{% + \UseTextSymbol{OT1}\textquoteright}% + \DeclareTextSymbol{\textquoteright}{OT1}{`\'}% + \DeclareTextSymbol{\textquoteright}{T1}{`\'}% +}{} +\@ifundefined{quotesinglbase}{% + 
\DeclareTextSymbol{\quotesinglbase}{T1}{13}}{} +\@ifundefined{quotedblbase}{% + \DeclareTextSymbol{\quotedblbase}{T1}{18}}{} +\@ifundefined{guillemotleft}{% + \DeclareTextSymbol{\guillemotleft}{T1}{19}}{} +\@ifundefined{guillemotright}{% + \DeclareTextSymbol{\guillemotright}{T1}{20}}{} +\@ifundefined{guilsinglleft}{% + \DeclareTextSymbol{\guilsinglleft}{T1}{14}}{} +\@ifundefined{guilsinglright}{% + \DeclareTextSymbol{\guilsinglright}{T1}{15}}{} +\fi +\expandafter\let\expandafter\protect\csname protect\endcsname +\def\allowhyphens{\penalty\@M \hskip\z@skip} +\lccode`\^^Y=`\^^Y +\def\set@low@box#1{\setbox\tw@\hbox{,}\setbox\z@\hbox{#1}% + \setbox\z@\hbox{\dimen@\ht\z@ \advance\dimen@ -\ht\tw@ + \lower\dimen@\box\z@}% + \ht\z@\ht\tw@ \dp\z@\dp\tw@} +\def\save@sf@q#1{{\ifhmode + \edef\@SF{\spacefactor\the\spacefactor}\else + \let\@SF\empty \fi \leavevmode #1\@SF}} +\expandafter\ifx\csname grmnU@D\endcsname\relax + \csname newdimen\endcsname\grmnU@D +\fi +\def\newumlaut#1{{\grmnU@D 1ex% + {\setbox\z@\hbox{\char127}\dimen@-.45ex\advance\dimen@\ht\z@ + \ifdim 1ex<\dimen@ \fontdimen5\font\dimen@ \fi}% + \accent127\fontdimen5\font\grmnU@D #1}\allowhyphens} +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname DeclareTextSymbol\endcsname\relax + \def\highumlaut#1{{\accent127 #1}\allowhyphens} +\else + \def\highumlaut#1{\"{#1}\allowhyphens} +\fi +\def\mdqon{\catcode`\"\active} +\def\mdqoff{\catcode`\"12\relax} +\begingroup \mdqoff +\def\x{\endgroup + \def\@MATHUMLAUT{\ddot}% = \mathaccent"707F + \def\@MATHss{\mathord{\mathchar"7019}}% TODO: correct? + \def\dq{"}}% TODO: or \textquotedbl? 
+\x +\begingroup + \def\do{\noexpand\do\noexpand}% + \edef\x{\endgroup + \def\noexpand\dospecials{\dospecials\do\"}}% +\x +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname @sanitize\endcsname\relax \else + \begingroup + \def\@makeother{\noexpand\@makeother\noexpand}% + \edef\x{\endgroup + \def\noexpand\@sanitize{\@sanitize\@makeother\"}}% + \x +\fi +\let\grmn@original@three=\3 % \3 may be defined or undefined. +\def\ck{% + \ifnum\grmn@dqwarninglevel>\@ne + \grmn@dq@warning@obsolete{\string\ck}{ck}% + \fi + \penalty\@M\-\allowhyphens ck} +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname DeclareTextSymbol\endcsname\relax +\expandafter\def\csname glqq \endcsname{% + \save@sf@q{\set@low@box{''\/}\box\z@\kern-.04em\allowhyphens}} +\edef\glqq{\noexpand\protect + \expandafter\noexpand\csname glqq \endcsname} +\let\@glqq=\glqq +\expandafter\def\csname grqq \endcsname{% + \save@sf@q{\kern-.07em``\kern.07em}}% ('') +\edef\grqq{\noexpand\protect + \expandafter\noexpand\csname grqq \endcsname} +\let\@grqq=\grqq +\expandafter\def\csname glq \endcsname{% + \save@sf@q{\set@low@box{'\/}\box\z@\kern-.04em\allowhyphens}} +\edef\glq{\noexpand\protect + \expandafter\noexpand\csname glq \endcsname} +\let\@glq=\glq +\expandafter\def\csname grq\endcsname{% + \save@sf@q{\kern-.0125em`\kern.07em}} +\edef\grq{\noexpand\protect + \expandafter\noexpand\csname grq \endcsname} +\let\@grq=\grq +\expandafter\def\csname flqq \endcsname{% + \relax\ifmmode \mathrel{\ll}\else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle \ll $}% + \allowhyphens}\fi} +\edef\flqq{\noexpand\protect + \expandafter\noexpand\csname flqq \endcsname} +\let\@flqq=\flqq +\expandafter\def\csname frqq \endcsname{% + \relax\ifmmode \mathrel{\gg}\else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle \gg $}% + \allowhyphens}\fi} +\edef\frqq{\noexpand\protect + \expandafter\noexpand\csname frqq \endcsname} +\let\@frqq=\frqq 
+\expandafter\def\csname flq \endcsname{% + \relax\ifmmode <\else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle <$}\allowhyphens}\fi} +\edef\flq{\noexpand\protect + \expandafter\noexpand\csname flq \endcsname} +\let\@flq=\flq +\expandafter\def\csname frq \endcsname{% + \relax\ifmmode >\else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle >$}\allowhyphens}\fi} +\edef\frq{\noexpand\protect + \expandafter\noexpand\csname frq \endcsname} +\let\@frq=\frq +\else +\DeclareRobustCommand{\glqq}{% + \ifmmode\hbox{\quotedblbase}\else\quotedblbase\fi} +\ProvideTextCommandDefault{\quotedblbase}{% + \UseTextSymbol{OT1}\quotedblbase} +\ProvideTextCommand{\quotedblbase}{OT1}{% + \save@sf@q{\set@low@box{\textquotedblright\/}\box\z@ + \kern-.04em\allowhyphens}} +\ProvideTextCommand{\grqq}{T1}{\textquotedblleft} +\ProvideTextCommand{\grqq}{OT1}{% + \save@sf@q{\kern-.07em% + \ifmmode\hbox{\textquotedblleft}\else\textquotedblleft\fi + \kern.07em\relax}} +\ProvideTextCommandDefault{\grqq}{\UseTextSymbol{OT1}\grqq} +\DeclareRobustCommand{\glq}{% + \ifmmode\hbox{\quotesinglbase}\else\quotesinglbase\fi} +\ProvideTextCommandDefault{\quotesinglbase}{% + \UseTextSymbol{OT1}\quotesinglbase} +\ProvideTextCommand{\quotesinglbase}{OT1}{% + \save@sf@q{\set@low@box{\textquoteright\/}\box\z@ + \kern-.04em\allowhyphens}} +\ProvideTextCommand{\grq}{T1}{\textquoteleft} +\ProvideTextCommand{\grq}{OT1}{% + \save@sf@q{\kern-.0125em% + \ifmmode\hbox{\textquoteleft}\else\textquoteleft\fi + \kern.07em\relax}} +\ProvideTextCommandDefault{\grq}{\UseTextSymbol{OT1}\grq} +\DeclareRobustCommand{\flqq}{% + \ifmmode\mathrel{\hbox{\guillemotleft}}\else\guillemotleft\fi} +\ProvideTextCommandDefault{\guillemotleft}{% + \UseTextSymbol{OT1}\guillemotleft} +\ProvideTextCommand{\guillemotleft}{OT1}{% + \ifmmode \ll \else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle \ll $}% + \allowhyphens}\fi} +\DeclareRobustCommand{\frqq}{% + 
\ifmmode\mathrel{\hbox{\guillemotright}}\else\guillemotright\fi} +\ProvideTextCommandDefault{\guillemotright}{% + \UseTextSymbol{OT1}\guillemotright} +\ProvideTextCommand{\guillemotright}{OT1}{% + \ifmmode \gg \else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle \gg $}% + \allowhyphens}\fi} +\DeclareRobustCommand{\flq}{% + \ifmmode\mathbin{\hbox{\guilsinglleft}}\else\guilsinglleft\fi} +\ProvideTextCommandDefault{\guilsinglleft}{% + \UseTextSymbol{OT1}\guilsinglleft} +\ProvideTextCommand{\guilsinglleft}{OT1}{% + \ifmmode <\else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle <$}\allowhyphens}\fi} +\DeclareRobustCommand{\frq}{% + \ifmmode\mathbin{\hbox{\guilsinglright}}\else\guilsinglright\fi} +\ProvideTextCommandDefault{\guilsinglright}{% + \UseTextSymbol{OT1}\guilsinglright} +\ProvideTextCommand{\guilsinglright}{OT1}{% + \ifmmode >\else \save@sf@q{\penalty\@M + \raise .27ex\hbox{$\m@th\scriptscriptstyle >$}\allowhyphens}\fi} +\fi +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname DeclareTextSymbol\endcsname\relax +\def\umlauthigh{\def\"##1{{\accent127 ##1}}} +\def\umlautlow{\def\"{\protect\newumlaut}} +\else +\def\umlauthigh{\def\grmn@OTumlaut##1{{\accent 127 ##1}}} +\def\umlautlow{\def\grmn@OTumlaut{\protect\newumlaut}} +\umlauthigh +\DeclareTextAccent{\"}{T1}{4} +\DeclareTextAccent{\"}{OT1}{127} +\DeclareTextCompositeCommand{\"}{OT1}{a}{\grmn@OTumlaut{a}}% +\DeclareTextCompositeCommand{\"}{OT1}{o}{\grmn@OTumlaut{o}}% +\DeclareTextCompositeCommand{\"}{OT1}{u}{\grmn@OTumlaut{u}}% +\DeclareTextCompositeCommand{\"}{OT1}{A}{\grmn@OTumlaut{A}}% +\DeclareTextCompositeCommand{\"}{OT1}{O}{\grmn@OTumlaut{O}}% +\DeclareTextCompositeCommand{\"}{OT1}{U}{\grmn@OTumlaut{U}}% +\DeclareTextComposite{\"}{T1}{a}{228} +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname AtBeginDocument\endcsname\relax \else + \AtBeginDocument{% + \DeclareTextAccent{\"}{T1}{4}% % from `ltpatch.tex' 
+ \DeclareTextAccent{\"}{OT1}{127}% % from `ltpatch.tex' + % \DeclareTextCommand{\"}{OT1}{\newumlaut}% from `ltpatch.tex' + % % has to be removed + \DeclareTextCompositeCommand{\"}{OT1}{a}{\grmn@OTumlaut{a}}% + \DeclareTextCompositeCommand{\"}{OT1}{o}{\grmn@OTumlaut{o}}% + \DeclareTextCompositeCommand{\"}{OT1}{u}{\grmn@OTumlaut{u}}% + \DeclareTextCompositeCommand{\"}{OT1}{A}{\grmn@OTumlaut{A}}% + \DeclareTextCompositeCommand{\"}{OT1}{O}{\grmn@OTumlaut{O}}% + \DeclareTextCompositeCommand{\"}{OT1}{U}{\grmn@OTumlaut{U}}% + \DeclareTextComposite{\"}{T1}{a}{228}% % from `ltpatch.tex' + } +\fi +\fi +\def\dqwarninglevel#1{\chardef\grmn@dqwarninglevel=#1\relax} + \dqwarninglevel{2} +\expandafter\ifx\csname on@line\endcsname\relax + \ifx\inputlineno\undefined \def\on@line{}% + \else + \ifnum\inputlineno<\z@ \def\on@line{}% + \else \def\on@line{ on input line \the\inputlineno}% +\fi\fi\fi +\def\grmn@dq@error#1{% + \errhelp{Use `` for a simple double quote character.}% + \errmessage{ngerman: The command \dq\string#1 is undefined}} +\def\grmn@dq@warning#1{% + \immediate\write\sixt@@n + {ngerman: \dq\string#1 is possibly wrong\on@line.}} +\def\grmn@dq@warning@obsolete#1#2{% + \immediate\write\sixt@@n + {ngerman: #1 is now obsolete, please use #2 instead\on@line.}} +\def\grmn@dqobsolete@three#1{#1#1% + \ifnum\grmn@dqwarninglevel>\@ne + \grmn@dq@warning@obsolete{\dq#1#1}{#1#1#1}% + \fi + \penalty\@M\-\allowhyphens} +\def\grmn@dqobsolete@ck#1#2{% + \ifnum\grmn@dqwarninglevel>\@ne + \grmn@dq@warning@obsolete{\dq#1#2}{#1#2}% + \fi + \penalty\@M\-\allowhyphens#1} +\def\grmn@dq@macro#1#2{% + \expandafter#1\csname @grmn@@\string #2dq\endcsname} +\def\def@dqmacro#1#2#3{% + \grmn@dqredefcheck{#1}% + \grmn@dq@macro\def{#1}{{#2}{#3}}} +\def\let@dqmacro#1#2{\begingroup + \grmn@dqredefcheck{#1}% + \edef\x{\endgroup \let + \grmn@dq@macro\noexpand{#1}\grmn@dq@macro\noexpand{#2}}% + \x} +\def\grmn@dqredefcheck#1{} +\def\@active@dq#1{% + \grmn@dq@macro\ifx{#1}\relax + 
\ifnum\grmn@dqwarninglevel>\z@ \grmn@dq@error{#1}\fi + \expandafter\grmn@@normal@dq + \else + \expandafter\grmn@@active@dq + \fi {#1}} +\def\grmn@@active@dq#1{% + \grmn@dq@macro\ifx{#1}\noexpand + \expandafter\grmn@normal@dq + \else + \expandafter\grmn@@@active@dq + \fi {#1}} +\def\grmn@@normal@dq#1{``#1} +\def\grmn@normal@dq#1{\dq #1} +\begingroup + \catcode`\(=1\lccode`\(=`\{\catcode`\{=12 + \catcode`\)=2\lccode`\)=`\}\catcode`\}=12 + \catcode`\ =11\relax% <= do not delete this and the +\lowercase(\endgroup% <=== following percent characters! +\def\grmn@@@active@dq#1(% +\expandafter\grmn@@@@active@dq\expandafter{\string#1})% +\def\grmn@@@@active@dq(% +\ifx\protect\relax\else\ifx\protect\empty\else% +\expandafter\expandafter\expandafter\protect% +\fi\fi% +\active@dq \dq@prtct )% +\def\dq@prtct#1#(\@dq@prtct)% +\def\@dq@prtct#1(\string\dq@prtct{\string#1})% +\def\dq@prtct #1{#2}(\string\dq@prtct{\string#2})% +\def\active@dq #1{#2}(\grmn@active@@dq(#2))% +)% +\def\active@dq#1#{\@active@dq}% +\def\grmn@active@@dq#1{% + \csname grmn@dq\ifmmode second\else first\fi + \expandafter\expandafter\expandafter\expandafter + \grmn@dq@macro\endcsname{#1}} +\grmn@dq@macro\let{0}=\noexpand +\let@dqmacro{1}{0}\let@dqmacro{2}{0}\let@dqmacro{3}{0} +\let@dqmacro{4}{0}\let@dqmacro{5}{0}\let@dqmacro{6}{0} +\let@dqmacro{7}{0}\let@dqmacro{8}{0}\let@dqmacro{9}{0} +\let@dqmacro{A}{0}\let@dqmacro{B}{0}\let@dqmacro{C}{0} +\let@dqmacro{D}{0}\let@dqmacro{E}{0}\let@dqmacro{F}{0} +\def@dqmacro{}{\dq{}}{\dq{}} +\def@dqmacro{a}{\"a}{\@MATHUMLAUT a} +\def@dqmacro{o}{\"o}{\@MATHUMLAUT o} +\def@dqmacro{u}{\"u}{\@MATHUMLAUT u} +\def@dqmacro{A}{\"A}{\@MATHUMLAUT A} +\def@dqmacro{O}{\"O}{\@MATHUMLAUT O} +\def@dqmacro{U}{\"U}{\@MATHUMLAUT U} +\begingroup\expandafter\expandafter\expandafter\endgroup +\expandafter\ifx\csname DeclareTextSymbol\endcsname\relax + \def@dqmacro{s}{\ss{}}{\@MATHss} +\else + \def@dqmacro{s}{\ss}{\@MATHss} +\fi +\def@dqmacro{S}{\SS}{\SS} +\let@dqmacro{z}{s} 
+\def@dqmacro{Z}{SZ}{SZ} +\def@dqmacro{e}{\highumlaut e}{\@MATHUMLAUT e} +\def@dqmacro{E}{\highumlaut E}{\@MATHUMLAUT E} +\def@dqmacro{i}{\highumlaut{\i}}{\@MATHUMLAUT\imath} +\def@dqmacro{I}{\highumlaut I}{\@MATHUMLAUT I} +\def@dqmacro{`}{\glqq}{\glqq} +\def@dqmacro{'}{\grqq}{\grqq} +\def@dqmacro{<}{\flqq}{\flqq} +\def@dqmacro{>}{\frqq}{\frqq} +\def@dqmacro{-}{\penalty\@M\-\allowhyphens}% + {\penalty\@M\-\allowhyphens} +\def@dqmacro{|}{\penalty\@M\discretionary{-}{}{\kern.03em}% + \allowhyphens}{} +\def@dqmacro{"}{\hskip\z@skip}{\hskip\z@skip} +\def@dqmacro{~}{\leavevmode\hbox{-}}{-} +\def@dqmacro{=}{\penalty\@M-\hskip\z@skip}% + {\penalty\@M-\hskip\z@skip} +\def@dqmacro{c}{\grmn@dqobsolete@ck ck}{c} +\def@dqmacro{C}{\grmn@dqobsolete@ck CK}{C} +\def@dqmacro{l}{\grmn@dqobsolete@three l}{l} +\def@dqmacro{L}{\grmn@dqobsolete@three L}{L} +\def@dqmacro{m}{\grmn@dqobsolete@three m}{m} +\def@dqmacro{M}{\grmn@dqobsolete@three M}{M} +\def@dqmacro{n}{\grmn@dqobsolete@three n}{n} +\def@dqmacro{N}{\grmn@dqobsolete@three N}{N} +\def@dqmacro{p}{\grmn@dqobsolete@three p}{p} +\def@dqmacro{P}{\grmn@dqobsolete@three P}{P} +\def@dqmacro{r}{\grmn@dqobsolete@three r}{r} +\def@dqmacro{R}{\grmn@dqobsolete@three R}{R} +\def@dqmacro{t}{\grmn@dqobsolete@three t}{t} +\def@dqmacro{T}{\grmn@dqobsolete@three T}{T} +\def@dqmacro{F}{\grmn@dqobsolete@three F}{F} +\def@dqmacro{f}{\grmn@dqobsolete@three f}{f} +\def\grmn@dqredefcheck#1{% + \wlog{ngerman: \grmn@dq@macro\ifx{#1}\relax \else re\fi + defining dq-command for `\string#1'\on@line.}}% +\def\month@ngerman{\ifcase\month \or + Januar\or Februar\or M\"arz\or April\or Mai\or Juni\or + Juli\or August\or September\or Oktober\or November\or Dezember\fi} +\def\datengerman{\def\today{\number\day.~\month@ngerman + \space\number\year}} +\def\datenaustrian{\def\today{\number\day.~\ifnum 1=\month + J\"anner\else \month@ngerman\fi \space\number\year}} +\def\month@english{\ifcase\month \or + January\or February\or March\or April\or May\or June\or + July\or 
August\or September\or October\or November\or December\fi} +\def\dateUSenglish{\def\today{\month@english + \space\number\day, \number\year}} +\def\dateenglish{\def\today{\number\day \ifcase\day \or + st\or nd\or rd\or th\or th\or th\or th\or th\or th\or th\or% 1..10 + th\or th\or th\or th\or th\or th\or th\or th\or th\or th\or% 11..20 + st\or nd\or rd\or th\or th\or th\or th\or th\or th\or th\or% 21..30 + st\fi + ~\month@english \space\number\year}} +\def\datefrench{\def\today{\number\day \ifnum1=\day \/$^{\rm er}$\fi + \space\ifcase\month \or + janvier\or f\'evrier\or mars\or avril\or mai\or juin\or + juillet\or ao\^ut\or septembre\or + octobre\or novembre\or d\'ecembre\fi + \space\number\year}} +\def\captionsngerman{% + \def\prefacename{Vorwort}% + \def\refname{Literatur}% + \def\abstractname{Zusammenfassung}% + \def\bibname{Literaturverzeichnis}% + \def\chaptername{Kapitel}% + \def\appendixname{Anhang}% + \def\contentsname{Inhaltsverzeichnis}% % oder nur: Inhalt + \def\listfigurename{Abbildungsverzeichnis}% + \def\listtablename{Tabellenverzeichnis}% + \def\indexname{Index}% + \def\figurename{Abbildung}% + \def\tablename{Tabelle}% % oder: Tafel + \def\partname{Teil}% + \def\enclname{Anlage(n)}% % oder: Beilage(n) + \def\ccname{Verteiler}% % oder: Kopien an + \def\headtoname{An}% + \def\pagename{Seite}% + \def\seename{siehe}% + \def\alsoname{siehe auch}} +\let\captionsnaustrian=\captionsngerman +\def\captionsenglish{% + \def\prefacename{Preface}% + \def\refname{References}% + \def\abstractname{Abstract}% + \def\bibname{Bibliography}% + \def\chaptername{Chapter}% + \def\appendixname{Appendix}% + \def\contentsname{Contents}% + \def\listfigurename{List of Figures}% + \def\listtablename{List of Tables}% + \def\indexname{Index}% + \def\figurename{Figure}% + \def\tablename{Table}% + \def\partname{Part}% + \def\enclname{encl}% + \def\ccname{cc}% + \def\headtoname{To}% + \def\pagename{Page}% + \def\seename{see}% + \def\alsoname{see also}} 
+\let\captionsUSenglish=\captionsenglish +\def\captionsfrench{% + \def\prefacename{Pr\'eface}% + \def\refname{R\'ef\'erences}% + \def\abstractname{R\'esum\'e}% + \def\bibname{Bibliographie}% + \def\chaptername{Chapitre}% + \def\appendixname{Annexe}% + \def\contentsname{Table des mati\`eres}% + \def\listfigurename{Liste des figures}% + \def\listtablename{Liste des tableaux}% + \def\indexname{Index}% + \def\figurename{Figure}% + \def\tablename{Tableau}% + \def\partname{Partie}% + \def\enclname{P.~J.}% + \def\ccname{Copie \`a}% + \def\headtoname{A}% + \def\pagename{Page}% + \def\seename{voir}% + \def\alsoname{voir aussi}}% +\def\extrasUSenglish{} +\let\noextrasUSenglish=\extrasUSenglish +\let\extrasenglish=\extrasUSenglish +\let\noextrasenglish=\extrasenglish +\def\extrasngerman{\frenchspacing \uchyph\@ne + \lefthyphenmin\tw@ \righthyphenmin\tw@} +\def\noextrasngerman{% + \ifnum\sfcode`\.=\@m \else \noexpand\nonfrenchspacing \fi + \uchyph\the\uchyph\relax + \lefthyphenmin\the\lefthyphenmin + \righthyphenmin\the\righthyphenmin} +\let\extrasnaustrian=\extrasngerman +\let\noextrasnaustrian=\noextrasngerman +\def\extrasfrench{\frenchspacing} +\def\noextrasfrench{% + \ifnum\sfcode`\.=\@m \else \noexpand\nonfrenchspacing \fi} +\@ifundefined{l@USenglish}{% + \@ifundefined{l@english}{\chardef\l@USenglish=255 }% + {\chardef\l@USenglish=\l@english}% + \wlog{ngerman -- \string\language\space number for USenglish % + undefined, default \number\l@USenglish\space used.}% +}{} +\@ifundefined{l@english}{% + \chardef\l@english=\l@USenglish + \wlog{ngerman -- \string\language\space number for UKenglish % + undefined, default \number\l@english\space used.}% +}{} +\@ifundefined{l@ngerman}{% + \@ifundefined{l@naustrian}{% + \chardef\l@ngerman=255 % + \message{ngerman -- \string\language\space number for ngerman % + undefined, default \number\l@ngerman\space used,}% + \message{ngerman -- Please read \string"gerdoc.tex\string" how % + to install hyphenation patterns.}% + }{% + 
\chardef\l@ngerman=\l@naustrian
+ \wlog{ngerman -- \string\language\space number for ngerman %
+ undefined, default \number\l@ngerman\space used.}%
+ }%
+}{}
+\@ifundefined{l@naustrian}{%
+ \chardef\l@naustrian=\l@ngerman
+ \wlog{ngerman -- \string\language\space number for naustrian %
+ undefined, default \number\l@naustrian\space used.}%
+}{}
+\@ifundefined{l@french}{%
+ \chardef\l@french=255
+ \wlog{ngerman -- \string\language\space number for French %
+ undefined, default \number\l@french\space used.}%
+}{}
+\def\grmn@originalTeX{}
+\def\languagename{}
+\expandafter\def\csname selectlanguage \endcsname#1{\relax
+ \expandafter\ifx\csname l@#1\endcsname\relax
+ \errhelp{Your command will be ignored, type to proceed}%
+ \errmessage{You haven't defined the language #1 yet}%
+ \else
+ \grmn@originalTeX
+ \edef\languagename{#1}%
+ \edef\grmn@originalTeX{\csname noextras#1\endcsname
+ \def\noexpand\grmn@originalTeX{}}%
+ \csname date#1\endcsname
+ \csname captions#1\endcsname
+ \csname extras#1\endcsname\relax
+ % This line is for `bibgerm' ...
+ \csname bibs#1\endcsname
+ % ... it will no longer be present in later
+ % `german.sty' versions. So do not rely on it!
+ \language \csname l@#1\endcsname\relax
+ \fi}
+\begingroup\catcode`\ =11\relax% <= do not delete this and the
+\toks0={\endgroup% <=== following percent characters!
+\def\selectlanguage#1{\protect\selectlanguage {%
+\ifnum\escapechar=\expandafter`\string#1\empty%
+\else\string#1\empty\fi}}}%
+\the\toks0\relax%
+\def\p@selectlanguage{\selectlanguage}
+\def\iflanguage#1{%
+ \ifx\csname l@#1\endcsname\relax
+ \expandafter\grmn@dqsecond
+ \else \ifnum\csname l@#1\endcsname=\language
+ \expandafter\expandafter\expandafter\grmn@dqfirst
+ \else
+ \expandafter\expandafter\expandafter\grmn@dqsecond
+ \fi\fi
+}
+\expandafter\ifx\csname language\endcsname\relax
+ \csname newcount\endcsname\language
+ \language=0 \fi
+\expandafter\ifx\csname lefthyphenmin\endcsname\relax
+ \csname newcount\endcsname\lefthyphenmin
+ \lefthyphenmin=2 \fi
+\expandafter\ifx\csname righthyphenmin\endcsname\relax
+ \csname newcount\endcsname\righthyphenmin
+ \righthyphenmin=3 \fi
+\expandafter\ifx\csname setlanguage\endcsname\relax
+ \def\setlanguage{\relax
+ \ifhmode \else
+ \errhelp{Use \selectlanguage to switch languages.}%
+ \errmessage{\setlanguage allowed only in horizontal mode}%
+ \fi
+ \begingroup\afterassignment\endgroup\count@=}
+\fi
+\begingroup \mdqon
+\def\x{\endgroup
+ \def\originalTeX{\mdqoff \let"\dq \umlauthigh
+ \let\3\grmn@original@three
+ \selectlanguage{USenglish}}%
+ \def\ngermanTeX{\mdqon \let"\@active@dq \umlautlow
+ \let\grmn@original@three\3\let\3\ss
+ \selectlanguage{ngerman}}}%
+\x
+\catcode`\@=\atcode % return to previous catcode
+\ngermanTeX
+\endinput
+%%
+%% End of file `ngerman.sty'.