




LANGUAGE TRANSLATORS UNIT: 3

Syllabus

Source Program Analysis: Compilers – Analysis of the Source Program – Phases of a

Compiler – Cousins of Compiler – Grouping of Phases – Compiler Construction Tools.

Lexical Analysis: Role of Lexical Analyzer – Input Buffering – Specification of Tokens –

Recognition of Tokens –A Language for Specifying Lexical Analyzer.

Text Book

Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman, “Compilers: Principles, Techniques, and Tools”, Addison-Wesley, 1988.

Compiler

A compiler is a program that can read a program in one language (the source language) and translate it into an equivalent program in another language (the target language). It is also

expected that a compiler should make the target code efficient and optimized in terms of time

and space. An important role of the compiler is to report any errors in the source program that it

detects during the translation process.

Commonly, the source language is a high-level programming language (i.e. a problem-

oriented language), and the target language is a machine language or assembly language (i.e.

a machine-oriented language). Thus compilation is a fundamental concept in the production

of software: it is the link between the (abstract) world of application development and the

low-level world of application execution on machines.

Compiler design principles provide an in-depth view of the translation and optimization process. Compiler design covers the basic translation mechanism and error detection and recovery. It includes lexical, syntax, and semantic analysis as the front end, and code generation and optimization as the back end.

-------

Language Processing System (Cousins of Compiler)


In addition to a compiler, several other programs may be required to create an executable

target program.

A source program may be divided into modules stored in separate files.

The task of collecting the source program is sometimes entrusted to a separate program,

called a preprocessor. The preprocessor may also expand shorthands, called macros, into

source language statements. The modified source program is then fed to a compiler.

The compiler may produce an assembly-language program as its output, because

assembly language is easier to produce as output and is easier to debug.

The assembly language is then processed by a program called an assembler that produces

relocatable machine code as its output.

Large programs are often compiled in pieces, so the relocatable machine code may have

to be linked together with other relocatable object files and library files into the code that

actually runs on the machine. The linker resolves external memory addresses, where the

code in one file may refer to a location in another file.

The loader then puts together all of the executable object files into memory for execution.

I) Preprocessor: A preprocessor is a program that processes its input data to produce output that

is used as input to another program. The preprocessor is executed before the actual compilation

of code begins. It may perform the following functions:

1. Macro processing  2. File Inclusion  3. Rational Preprocessors  4. Language Extension

1. Macro processing: A macro is a rule or pattern that specifies how a certain input sequence

(often a sequence of characters) should be mapped to an output sequence (also often a sequence

of characters) according to a defined procedure.

Macro definitions (#define, #undef)


When the preprocessor encounters this directive, it replaces any occurrence of the identifier in the rest of the code by the replacement text.

Example:

#define TABLE_SIZE 100
int table1[TABLE_SIZE];

After the preprocessor has replaced TABLE_SIZE, the code becomes equivalent to:

int table1[100];

2. File Inclusion

Preprocessor includes header files into the program text. When the preprocessor finds an

#include directive, it replaces it by the entire content of the specified file. There are two ways to

specify a file to be included:

#include "file" and #include <file>

The only difference between both expressions is the places (directories) where the compiler is

going to look for the file.

In the first case where the file name is specified between double-quotes, the file is searched

first in the same directory that includes the file containing the directive. If it is not there, the compiler searches for the file in the default directories where it is configured to look for

the standard header files.

If the file name is enclosed between angle-brackets <> the file is searched directly where the

compiler is configured to look for the standard header files. Therefore, standard header files are

usually included in angle-brackets, while other specific header files are included using quotes.

3."Rational Preprocessors:

These processors augment older languages with more modern flow of control and data

structuring facilities. For example, such a preprocessor might provide the user with built-in

macros for constructs like while-statements or if-statements, where none exist in the

programming language itself.

4. Language Extension:

These processors attempt to add capabilities to the language by what amounts to built-in macros.

For example, the language Equel is a database query language embedded in C. Statements beginning with ## are taken by the preprocessor to perform the database access.


II) Assembler:

Typically, a modern assembler creates object code by translating assembly instruction

mnemonics into opcodes, and by resolving symbolic names for memory locations and other

entities. There are two types of assemblers based on how many passes through the source are

needed to produce the executable program.

One-pass

Two-pass

One-pass assembler goes through the source code once and assumes that all symbols will be

defined before any instruction that references them. Two-pass assemblers create a table with all

symbols and their values in the first pass, and then use the table in a second pass to generate

code.
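The two-pass idea can be made concrete with a small C sketch. The toy instruction format, table sizes, and function names below are our own inventions for illustration; a real assembler also handles expressions, directives, and relocation.

#include <stdio.h>
#include <string.h>

/* Toy instruction: an optional label definition plus an op that references a label. */
struct Line { char label[16]; char op[8]; char arg[16]; };

struct Sym { char name[16]; int addr; };
static struct Sym symtab[64];
static int nsyms = 0;

static int lookup(const char *name) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(symtab[i].name, name) == 0) return symtab[i].addr;
    return -1;                                   /* undefined symbol */
}

static void assemble(struct Line *prog, int n) {
    /* Pass 1: record the address (here, the line number) of every label. */
    for (int i = 0; i < n; i++)
        if (prog[i].label[0]) {
            strcpy(symtab[nsyms].name, prog[i].label);
            symtab[nsyms++].addr = i;
        }
    /* Pass 2: emit code, replacing each symbolic operand by its address. */
    for (int i = 0; i < n; i++)
        printf("%03d: %-4s %d\n", i, prog[i].op, lookup(prog[i].arg));
}

int main(void) {
    struct Line prog[] = {           /* the forward reference to "done" is no problem */
        { "loop", "JMP", "done" },
        { "",     "JMP", "loop" },
        { "done", "JMP", "loop" },
    };
    assemble(prog, 3);
}

A one-pass assembler, by contrast, would have to backpatch the forward reference to done once its address became known.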

III) Linkers and Loaders:

Compilers, assemblers and linkers usually produce code whose memory references are made

relative to an undetermined starting location that can be anywhere in memory (relocatable

machine code). A loader calculates appropriate absolute addresses for these memory

locations and amends the code to use these addresses. The process of loading consists of

taking relocatable machine code, altering the relocatable addresses and placing the

altered instructions and data in memory at the proper locations.

A linker combines object code (machine code that has not yet been linked) produced from

compiling and assembling many source programs, as well as standard library functions and

resources supplied by the operating system. This involves resolving references in each

object file to external variables and procedures declared in other files. A linker or link

editor is a program that takes one or more objects generated by a compiler and

combines them into a single executable program.

------

ANALYSIS OF THE SOURCE PROGRAM

The analysis phase breaks up the source program into constituent pieces and creates an

intermediate representation of the source program. Analysis consists of three phases:

• Linear analysis

• Hierarchical analysis


• Semantic analysis

Linear analysis (Lexical analysis or Scanning) :

The lexical analysis phase reads the characters in the source program and groups them into tokens: sequences of characters having a collective meaning.

Example: position: = initial + rate * 10

Identifiers – position, initial, rate.

Assignment symbol – :=

Operators - +, *

Number – 10

Blanks – Eliminated

Hierarchical analysis (Syntax analysis or Parsing) :

It involves grouping the tokens of the source program hierarchically into nested collections

that are used by the compiler to synthesize output.

Semantic analysis :

This phase checks the source program for semantic errors and gathers type information for the

subsequent code generation phase. An important component of semantic analysis is type

checking.

Example : int to real conversion


Analysis – Synthesis Model of Compilation

The process of compilation has two parts, namely analysis and synthesis.

The analysis part is often called the front end of the compiler; the synthesis part is the back

end of the compiler.

Analysis :The analysis part breaks up the source program into constituent pieces and creates an

intermediate representation of the source program. The front end analyzes the source program,

determines its constituent parts, and constructs an intermediate representation of the program.

Typically the front end is independent of the target language.

Synthesis : The synthesis part constructs the desired target program from the intermediate

representation. The back end synthesizes the target program from the intermediate representation

produced by the front end. Typically the back end is independent of the source language.

Phases of a Compiler

A compiler operates in phases. A phase is a logically cohesive operation that takes the source program in one representation and produces output in another representation. The different

phases are as follows:

1. Lexical analysis (“scanning”)

o Reads in program, groups characters into “tokens”

2. Syntax analysis (“parsing”)

o Structures token sequence according to grammar rules of the language.

3. Semantic analysis

o Checks semantic constraints of the language.

4. Intermediate code generation

o Translates to “lower level” representation.

5. Code optimization

o Improves code quality.

6. Final code generation.


Front end : machine independent phases

1. Lexical analysis

2. Syntax analysis

3. Semantic analysis

4. Intermediate code generation

Back end : machine dependent phases

5. Code Optimization

6. Target Code Generation


Lexical Analysis

The first phase of a compiler is called lexical analysis, linear analysis, or scanning. The lexical

analyzer reads the stream of characters making up the source program and groups the

characters into meaningful sequences called lexemes. For each lexeme, the lexical analyzer

produces as output a token of the form (token-name, attribute-value), which it passes on to the subsequent phase, syntax analysis.

For example, suppose a source program contains the assignment statement

position := initial + rate * 60

The characters in this assignment could be grouped into the following lexemes and mapped

into the following tokens passed on to the syntax analyzer:

1. The identifier position

2. The assignment symbol :=

3. The identifier initial

4. The plus sign

5. The identifier rate

6. The multiplication sign

7. The number 60

The blanks separating the lexemes are eliminated during lexical analysis.

Syntax Analysis

The second phase of the compiler is syntax analysis or hierarchical analysis or parsing. In this

phase, expressions, statements, declarations, etc. are identified by using the results of lexical


analysis. The tokens from the lexical analyzer are grouped hierarchically into nested collections

with collective meaning. Syntax analysis is aided by using techniques based on formal grammar

of the programming language. This is represented using a parse tree.

This hierarchical grouping is represented as a “parse tree”, from which a syntax tree is produced as output.

A syntax tree is a compressed representation of the parse tree in which the operators appear as interior nodes and the operands as child nodes.

Semantic Analysis

The semantic analyzer uses the syntax tree and the information in the symbol table to check

the source program for semantic consistency with the language definition. It also gathers

type information and saves it in either the syntax tree or the symbol table, for subsequent use

during intermediate-code generation. An important part of semantic analysis is type

checking, where the compiler checks that each operator has matching operands. For

example, a binary arithmetic operator may be applied to either a pair of integers or to a pair

of floating-point numbers. If the operator is applied to a floating-point number and an

integer, the compiler may convert the integer into a floating-point number. For the above

syntax tree, applying this conversion with all the identifiers treated as real values, we get:

Before conversion:

        :=
       /  \
    id1    +
          / \
       id2   *
            / \
         id3   60

After conversion:

        :=
       /  \
    id1    +
          / \
       id2   *
            / \
         id3   inttoreal
                   |
                   60


Intermediate Code Generation

Intermediate code should possess the following properties

IC should be easily generated from the semantic representation of the source program

Should be easy to translate the IC to Target Program

Should be capable of holding the values computed during translation

Should maintain precedence ordering of the source language

Should be capable of holding the correct number of operands of the instruction.

An intermediate form called three-address code is considered, which consists of a sequence

of assembly-like instructions with three operands per instruction. Properties of three-address

instructions:

1. Each three-address assignment instruction has at most one operator on the right side.

2. The compiler must generate a temporary name to hold the value computed by a three-

address instruction.

3. Some "three-address instructions may have fewer than three operands.

Three-address code consists of a sequence of instructions, each of which has at most three operands; e.g. A = B + C, A = B, Sum = 10. For the running example the intermediate code is:

temp1 = inttoreal(60)
temp2 = id3 * temp1
temp3 = id2 + temp2
id1 = temp3
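In an implementation, three-address instructions are often stored as quadruples: an operator field plus up to three operand fields. A minimal C sketch (the struct layout and names are illustrative only):

#include <stdio.h>

struct Quad { const char *op, *arg1, *arg2, *result; };

int main(void) {
    /* position := initial + rate * 60 as a sequence of quadruples */
    struct Quad code[] = {
        { "inttoreal", "60",    "",      "temp1" },
        { "*",         "id3",   "temp1", "temp2" },
        { "+",         "id2",   "temp2", "temp3" },
        { "=",         "temp3", "",      "id1"   },
    };
    for (int i = 0; i < 4; i++)
        printf("%-8s <- %s %s %s\n", code[i].result, code[i].op, code[i].arg1, code[i].arg2);
}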

Code Optimization

The machine-independent code-optimization phase attempts to improve the intermediate

code so that better target code will result. There is a great variation in the amount of code

optimization different compilers perform. Those that do the most are called "optimizing compilers." A significant amount of time is spent on this phase. There are simple

optimizations that significantly improve the running time of the target program without

slowing down compilation too much. The aim is to improve the intermediate code so as to generate code that runs faster and/or occupies less space in memory.

There is a trade-off between compilation speed and execution speed.

Two optimization techniques:

Local optimization
- elimination of common subexpressions
- copy propagation

Loop optimization
- finding loop-invariant computations and moving them out of the loop

Optimized Code

temp1 := id3 * 60.0

id1 := id2 + temp1

Code Generation

The code generator takes as input an intermediate representation of the source program and

maps it into the target language. If the target language is machine code, registers or memory

locations are selected for each of the variables used by the program. Then, the intermediate

instructions are translated into sequences of machine instructions that perform the same task.

The final phase of the compiler is the generation of target code, consisting normally of

relocatable machine code or assembly code.

Memory locations are selected for each of the variables used by the program. Then,

intermediate instructions are each translated into a sequence of machine instructions that

perform the same task.

A crucial aspect is the assignment of variables to registers.

MOVF id3, R2

MULF #60.0, R2

MOVF id2, R1

ADDF R2, R1

MOVF R1, id1

THE STRUCTURE OF A COMPILER

[Figure: the phases of a compiler, from the lexical analyzer through the code generator, with the symbol-table manager and the error handler interacting with every phase; diagram omitted.]

Symbol-Table Management

An essential function of a compiler is to record the variable names used in the source

program and collect information about various attributes of each name.

These attributes may provide information about the storage allocated for a name, its type, its

scope (where in the program its value may be used), and in the case of procedure names,

such things as the number and types of its arguments, the method of passing each argument

(for example, by value or by reference), and the type returned.

The symbol table is a data structure containing a record for each variable name, with fields

for the attributes of the name. When an identifier in the source program is detected by the lexical analyzer, the identifier is entered into the symbol table.

The data structure should be designed to allow the compiler to find the record for each name

quickly and to store or retrieve data from that record quickly.

Address   Symbol     Attribute    Memory Location
1         Position   id1, real    1000
2         :=         Operator     1100
3         Initial
4         +
5         Rate
6         *
7         10
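A minimal C sketch of such a record and of the lookup-or-insert operation the lexical analyzer performs (field names and sizes are our own; production compilers typically use a hash table so that lookups are fast):

#include <stdio.h>
#include <string.h>

struct SymEntry {
    char name[32];         /* the identifier's lexeme       */
    char type[16];         /* attribute, e.g. "real"        */
    int  mem_location;     /* storage assigned to the name  */
};

static struct SymEntry table[256];
static int count = 0;

/* Return the index of name, inserting a fresh record if it is absent. */
int sym_lookup(const char *name) {
    for (int i = 0; i < count; i++)
        if (strcmp(table[i].name, name) == 0) return i;
    strcpy(table[count].name, name);
    return count++;
}

int main(void) {
    /* the second lookup of "position" finds the existing entry: prints 0 1 0 */
    printf("%d %d %d\n", sym_lookup("position"), sym_lookup("rate"), sym_lookup("position"));
}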

Error Detection and Reporting

Each phase can encounter errors. An important feature of the compiler is to detect and report errors.

Lexical Analysis --- Characters may be misspelled

Syntax Analysis --- Structure of the statement violates the rules of the language

Semantic Analysis --- No meaning in the operation involved

Intermediate Code Generation --- Operands have incompatible data types

Code Optimizer --- Certain Statements may never be reached

Code Generation --- Constant is too long

Symbol Table --- Multiple declared variables

The syntax and semantic analysis phases usually handle a large fraction of the errors

detectable by the compiler. The lexical phase can detect errors where the characters

remaining in the input do not form any token of the language. Errors when the token


stream violates the syntax of the language are determined by the syntax analysis phase.

During semantic analysis the compiler tries to detect constructs that have the right

syntactic structure but no meaning to the operation involved.

After detecting an error, a phase must be able to recover from the error so that

compilation can proceed and allow further errors to be detected.

A compiler which stops after detecting the first error is not useful. On detecting an error

the compiler must:

report the error in a helpful way,

correct the error if possible, and

continue processing (if possible) after the error to look for further errors.

---- ----- -----

Grouping Of Phases

Activities from more than one phase are often grouped together. The phases are collected into a

front end and a back end

Front End:

The front end consists of those phases, or parts of phases, that depend primarily on the source language and are largely independent of the target machine.

It includes lexical and syntactic analysis, the symbol table, semantic analysis, and the generation of intermediate code.

A certain amount of code optimization can be done by the front end.

It also includes error handling that goes along with each of these phases.

Back End:

The Back End includes those portions of the compiler that depend on the target

machine and these portions do not depend on the source language.

It covers the code-optimization phase and code generation, along with the necessary error handling and symbol-table operations.

Passes: Several phases of compilation are usually implemented in a single pass consisting

of reading an input file and writing an output file.

It is common for several phases to be grouped into one pass, and for the activity of these

phases to be interleaved during the pass.


Eg: Lexical analysis, syntax analysis, semantic analysis and intermediate code generation

might be grouped into one pass. If so, the token stream after lexical analysis may be

translated directly into intermediate code.

Reducing the number of passes: It is desirable to have relatively few passes, since it takes time

to read and write intermediate files.

On reducing the number of passes, the entire information of a pass has to be held in temporary memory. This increases the memory space needed to store the information.

Lexical analysis and syntax analysis are commonly grouped into one pass. Code generation cannot be done before intermediate code generation.

Intermediate and target code generation can be combined using backpatching: the address of a branch instruction is left blank and filled in when the information becomes available, as in the sketch below.
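A minimal sketch of backpatching in C (the instruction record and names are invented for the example):

#include <stdio.h>

struct Instr { const char *op; int target; };    /* target = -1 means "not yet known" */

int main(void) {
    struct Instr code[8];
    int n = 0;

    int patch_me = n;                                 /* remember where the branch sits */
    code[n++] = (struct Instr){ "JMPFALSE", -1 };     /* address left blank for now     */
    code[n++] = (struct Instr){ "ADD", 0 };
    code[n++] = (struct Instr){ "SUB", 0 };

    code[patch_me].target = n;                        /* backpatch: the label is now known */

    for (int i = 0; i < n; i++)
        printf("%d: %s %d\n", i, code[i].op, code[i].target);
}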

Compiler-Construction Tools

Compiler Construction tools are the tools that have been created for automatic design of

specific compiler components. Some commonly used compiler-construction tools include

1. Parser generator

2. Scanner generator

3. Syntax-directed translation engine

4. Automatic code generator

5. Data flow engine

Parser generators

- produce syntax analyzers from input that is based on a context-free grammar.

- Earlier, syntax analysis consumed a large fraction of the running time of a compiler and a large fraction of the intellectual effort of writing a compiler.

- This phase is now considered as one of the easiest to implement.

- Many parser generators utilize powerful parsing algorithms that are too complex to be

carried out by hand.

Scanner generators

- Automatically generate lexical analyzers from a specification based on regular expressions.

- The basic organization of the resulting lexical analyzer is a finite automaton.

BRANCH: CSE Y3/S5 SUBJECT: LANGUAGE TRANSLATORS

16

RAJIV GANDHI COLLEGE OF ENGINEERING & TECHNOLOGY/ DEPT. OF CSE Page 16

Syntax-directed translation engines

- produce collections of routines that walk a parse tree and generate intermediate code.

- The basic idea is that one or more “translations” are associated with each node of the

parse tree.

- Each translation is defined in terms of translations at its neighbor nodes in the tree.

Automatic code generators

- A tool takes a collection of rules that define the translation of each operation of the

intermediate language into the machine language for a target machine.

- The rules must include sufficient detail to handle the different possible access methods for data.

Data-flow analysis engines

- gathering of information about how values are transmitted from one part of a program

to each other part.

- Data-flow analysis is a key part of code optimization.

RECOGNITION OF TOKENS

The tokens obtained during lexical analysis are recognized using a finite automaton.

Finite Automata

We shall now discover how Lex turns its input program into a lexical analyzer. At the heart of

the transition is the formalism known as finite automata. These are essentially graphs, like

transition diagrams, with a few differences:

1. Finite automata are recognizers; they simply say "yes" or "no" about each possible input

string.

2. Finite automata come in two flavors:

(a) Nondeterministic finite automata (NFA) have no restrictions on the labels of their edges. A symbol can label several edges out of the same state, and ε, the empty string, is a possible label.

(b) Deterministic finite automata (DFA) have, for each state and for each symbol of the input alphabet, exactly one edge with that symbol leaving that state.


Both deterministic and nondeterministic finite automata are capable of recognizing the same

languages. In fact these languages are exactly the same languages, called the regular languages.

Nondeterministic Finite Automata

A nondeterministic finite automaton (NFA) consists of:

1. A finite set of states S.

2. A set of input symbols Σ, the input alphabet. We assume that ε, which stands for the empty string, is never a member of Σ.

3. A transition function that gives, for each state and for each symbol in Σ ∪ {ε}, a set of next states.

4. A state s0 from S that is distinguished as the start state (or initial state).

5. A set of states F, a subset of S, that is distinguished as the accepting states (or final states).

We can represent either an NFA or DFA by a transition graph, where the nodes are states and the labeled edges represent the transition function. There is an edge labeled a from state s to state t if and only if t is one of the next states for state s and input a. This graph is very much like a transition diagram, except:

a) The same symbol can label edges from one state to several different states, and

b) An edge may be labeled by ε, the empty string, instead of, or in addition to, symbols from the input alphabet.

A transition diagram is a finite directed graph in which each vertex represents a state and directed edges indicate the transitions from one state to another. Edges are labeled with inputs (or outputs). In this representation the initial state is shown as a circle with an arrow pointing towards it, a final state as two concentric circles, and the other intermediate states as single circles.

Deterministic Finite Automata

A deterministic finite automaton (DFA) is a special case of an NFA where:

1. There are no moves on input ε, and

2. For each state s and input symbol a, there is exactly one edge out of s labeled a.

If we are using a transition table to represent a DFA, then each entry is a single state. We may therefore represent this state without the curly braces that we use to form sets.

While the NFA is an abstract representation of an algorithm to recognize the strings of a certain language, the DFA is a simple, concrete algorithm for recognizing strings. It is fortunate indeed that every regular expression and every NFA can be converted to a DFA accepting the same language, because it is the DFA that we really implement or simulate when building lexical analyzers.
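To make this concrete, here is a small hand-coded DFA in C that recognizes the identifier pattern [A-Za-z][A-Za-z0-9]* used later in these notes; the state numbering and function names are our own:

#include <ctype.h>
#include <stdio.h>

/* States: 0 = start, 1 = inside an identifier (accepting), 2 = dead (reject). */
static int step(int state, char c) {
    switch (state) {
    case 0:  return isalpha((unsigned char)c) ? 1 : 2;
    case 1:  return isalnum((unsigned char)c) ? 1 : 2;
    default: return 2;
    }
}

/* Returns 1 if the whole string is an identifier, 0 otherwise. */
int is_identifier(const char *s) {
    int state = 0;
    for (; *s; s++)
        state = step(state, *s);    /* exactly one move per input character */
    return state == 1;
}

int main(void) {
    printf("%d %d\n", is_identifier("rate"), is_identifier("2fast"));   /* 1 0 */
}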


CONSTRUCTION OF AN NFA FROM A REGULAR EXPRESSION

We now give an algorithm for converting any regular expression to an NFA that defines the same language. The algorithm is syntax-directed, in the sense that it works recursively up the parse tree for the regular expression. For each subexpression the algorithm constructs an NFA with a single accepting state.

The McNaughton-Yamada-Thompson algorithm to convert a regular expression to an NFA:

INPUT: A regular expression r over alphabet Σ.

OUTPUT: An NFA N accepting L(r).

METHOD: Begin by parsing r into its constituent subexpressions. The rules for constructing an NFA consist of basis rules for handling subexpressions with no operators, and inductive rules for constructing larger NFAs from the NFAs for the immediate subexpressions of a given expression.
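The basis and inductive rules are normally drawn as small diagrams. As a rough substitute, the following C sketch builds Thompson-style fragments with one start and one accepting state each; the representation (at most one symbol edge and two ε-edges per state) and all names are our own simplification:

#include <stdlib.h>

/* Each state has at most one symbol edge and up to two epsilon edges. */
struct State {
    int symbol;                  /* -1 if the state has no symbol edge */
    struct State *out;           /* target of the symbol edge          */
    struct State *eps1, *eps2;   /* epsilon edges (NULL if absent)     */
};

/* An NFA fragment with a single start state and a single accepting state. */
struct Frag { struct State *start, *accept; };

static struct State *state(int sym) {
    struct State *s = calloc(1, sizeof *s);
    s->symbol = sym;
    return s;
}

/* Basis: NFA for a single symbol a:  start --a--> accept */
struct Frag nfa_symbol(int a) {
    struct Frag f = { state(a), state(-1) };
    f.start->out = f.accept;
    return f;
}

/* Induction: rs - link r's accepting state to s's start state by an epsilon edge. */
struct Frag nfa_concat(struct Frag r, struct Frag s) {
    r.accept->eps1 = s.start;
    return (struct Frag){ r.start, s.accept };
}

/* Induction: r|s - a new start state branches by epsilon into both fragments,
   and both old accepting states reach a new accepting state by epsilon. */
struct Frag nfa_union(struct Frag r, struct Frag s) {
    struct Frag f = { state(-1), state(-1) };
    f.start->eps1 = r.start;   f.start->eps2 = s.start;
    r.accept->eps1 = f.accept;
    s.accept->eps1 = f.accept;
    return f;
}

/* Induction: r* - an epsilon loop around r plus an epsilon bypass for zero repetitions. */
struct Frag nfa_star(struct Frag r) {
    struct Frag f = { state(-1), state(-1) };
    f.start->eps1 = r.start;   f.start->eps2 = f.accept;   /* zero times   */
    r.accept->eps1 = r.start;  r.accept->eps2 = f.accept;  /* loop or exit */
    return f;
}

int main(void) {
    /* build the NFA for (a|b)* */
    struct Frag re = nfa_star(nfa_union(nfa_symbol('a'), nfa_symbol('b')));
    return re.start && re.accept ? 0 : 1;
}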



THE ROLE OF THE LEXICAL ANALYZER (11)

The main task of the lexical analyzer is to read the input characters of the source program,

group them into lexemes, and produce as output a sequence of tokens for each lexeme in the

source program. The stream of tokens is sent to the parser for syntax analysis. It is common

for the lexical analyzer to interact with the symbol table as well. When the lexical analyzer

discovers a lexeme constituting an identifier, it needs to enter that lexeme into the symbol

table. In some cases, information regarding the kind of identifier may be read from the

symbol table by the lexical analyzer to assist it in determining the proper token it must pass

to the parser.

These interactions are suggested in Fig. 3.1. Commonly, the interaction is

implemented by having the parser call the lexical analyzer. The call, suggested by the

getNextToken command, causes the lexical analyzer to read characters from its input until it

can identify the next lexeme and produce for it the next token, which it returns to the parser.

Since the lexical analyzer is the part of the compiler that reads the source text, it may

perform certain other tasks besides identification of lexemes. One such task is stripping out

comments and whitespace (blank, newline, tab, and perhaps other characters that are used to

separate tokens in the input). Another task is correlating error messages generated by the

compiler with the source program. For instance, the lexical analyzer may keep track of the

number of newline characters seen, so it can associate a line number with each error

message. In some compilers, the lexical analyzer makes a copy of the source program with

the error messages inserted at the appropriate positions. If the source program uses a macro-

preprocessor, the expansion of macros may also be performed by the lexical analyzer.

Sometimes, lexical analyzers are divided into a cascade of two processes:

a) Scanning consists of the simple processes that do not require tokenization of the

input, such as deletion of comments and compaction of consecutive whitespace characters

into one.

b) Lexical analysis proper is the more complex portion, where the scanner produces

the sequence of tokens as output.


Tokens, Patterns, and Lexemes

When discussing lexical analysis, we use three related but distinct terms:

A token is a pair consisting of a token name and an optional attribute value. The

token name is an abstract symbol representing a kind of lexical unit, e.g., a

particular keyword, or a sequence of input characters denoting an identifier. The

token names are the input symbols that the parser processes. In what follows, we

shall generally write the name of a token in boldface. We will often refer to a

token by its token name.

A pattern is a description of the form that the lexemes of a token may take. In the

case of a keyword as a token, the pattern is just the sequence of characters that

form the keyword. For identifiers and some other tokens, the pattern is a more

complex structure that is matched by many strings.

A lexeme is the smallest logical unit of a program, such as A, B, 1.0, true. It is a sequence of characters in the source program that matches the pattern for a token and is identified by the lexical analyzer as an instance of that token.
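A typical concrete rendering of the (token-name, attribute-value) pair in C might be the following; the enum members and union fields are illustrative, not a fixed interface:

#include <stdio.h>

enum TokenName { ID, NUMBER, ASSIGN, PLUS, TIMES, END };

struct Token {
    enum TokenName name;        /* abstract symbol consumed by the parser       */
    union {                     /* the optional attribute value                 */
        int    symtab_index;    /* for ID: the identifier's symbol-table entry  */
        double value;           /* for NUMBER: the numeric value                */
    } attr;
};

int main(void) {
    struct Token t = { NUMBER, { .value = 60.0 } };
    printf("%d %g\n", t.name, t.attr.value);
    /* The parser drives the scanner roughly as:
           while ((t = getNextToken()).name != END) { ... }
       where getNextToken is the scanner entry point described above. */
}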

Lexical Errors

It is hard for a lexical analyzer to tell, without the aid of other components, that there is a source-code error. For instance, if the string fi is encountered for the first time in a C program in the context:

fi ( a == f(x) ) . . .

a lexical analyzer cannot tell whether fi is a misspelling of the keyword if or an undeclared function identifier. Since fi is a valid lexeme for the token id, the lexical analyzer must return the token id to the parser and let some other phase of


the compiler - probably the parser in this case - handle an error due to

transposition of the letters.

However, suppose a situation arises in which the lexical analyzer is

unable to proceed because none of the patterns for tokens matches any prefix of

the remaining input. The simplest recovery strategy is "panic mode" recovery.

We delete successive characters from the remaining input, until the lexical

analyzer can find a well-formed token at the beginning of what input is left. This

recovery technique may confuse the parser, but in an interactive computing

environment it may be quite adequate.

Other possible error-recovery actions are:

1. Delete one character from the remaining input.

2. Insert a missing character into the remaining input.

3. Replace a character by another character.

4. Transpose two adjacent characters.

Minimum distance error correction

It is the minimum number of corrections needed to convert an invalid

lexeme to a valid one.

It is one strategy a lexical analyzer can follow to correct errors in the lexemes.

Transformations like these may be tried in an attempt to repair the input.

The simplest such strategy is to see whether a prefix of the remaining input can

be transformed into a valid lexeme by a single transformation. This strategy

makes sense, since in practice most lexical errors involve a single character. A

more general correction strategy is to find the smallest number of transformations

needed to convert the source program into one that consists only of valid

lexemes, but this approach is considered too expensive in practice to be worth the

effort.
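"Minimum distance" here is essentially edit distance. A compact dynamic-programming sketch in C (our own illustration, with a fixed size bound) that counts the single-character insertions, deletions, and replacements needed to turn one lexeme into another:

#include <stdio.h>
#include <string.h>

static int min3(int a, int b, int c) { return a < b ? (a < c ? a : c) : (b < c ? b : c); }

/* Edit distance between s and t; insert, delete, replace each cost 1.
   Assumes both strings are shorter than 64 characters. */
int edit_distance(const char *s, const char *t) {
    int m = strlen(s), n = strlen(t), d[64][64];
    for (int i = 0; i <= m; i++) d[i][0] = i;
    for (int j = 0; j <= n; j++) d[0][j] = j;
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            d[i][j] = min3(d[i-1][j] + 1,                      /* delete  */
                           d[i][j-1] + 1,                      /* insert  */
                           d[i-1][j-1] + (s[i-1] != t[j-1]));  /* replace */
    return d[m][n];
}

int main(void) {
    /* fi -> if: 2, since a transposition counts as two single-character edits here */
    printf("%d\n", edit_distance("fi", "if"));
}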



1. INPUT BUFFERING:

The lexical analyzer uses two pointers to read tokens:

lb (lexeme-beginning) – a pointer that indicates the beginning of the lexeme.

sp (search pointer) – a pointer that keeps track of the portion of the input string scanned so far.

For the input

Begin I := I + 1 ; J := J + 1 . . . . .

both pointers initially point to the beginning of a lexeme (fig a). The search pointer sp then scans forward to search for the end of the lexeme. The end of the lexeme, in this case, is indicated by the blank space after "Begin" (fig b): the lexeme is identified only when sp scans the blank space after "Begin". When the end of the lexeme is identified, the token and the attribute corresponding to this lexeme are returned, and lb and sp are then made to point to the beginning of the next lexeme (fig c).

[Figs. a-c: the pointers lb and sp over the buffer "Begin I := I + 1 ; J := J + 1 . . ."; (a) both at the start of "Begin", (b) sp at the blank after "Begin", (c) both moved on to the next lexeme; diagrams omitted.]


Reading the input character by character from secondary storage is costly. Instead, a block of data is first read into a buffer and then scanned by the lexical analyzer.

Commonly used buffering methods are:

1. One-buffer scheme: there is a problem if a lexeme crosses the buffer boundary. To scan the rest of the lexeme, the buffer has to be refilled, thereby overwriting the first part of the lexeme.

2. Two-buffer scheme: here buffer 1 and buffer 2 are scanned alternately. When the end of the current buffer is reached, the other buffer is filled. Hence the problem encountered in the previous method is solved.

In this scheme, the second buffer is loaded when the first buffer becomes full; similarly, the first buffer is refilled when the end of the second is reached. Each time the sp pointer is incremented, two tests have to be made: one for the end of the buffer and one to determine what character was read. This can be reduced to one test if we include a sentinel character.

Sentinel Character: an extra character other than input characters is added at the end of

input buffer to reduce buffer tests, for example the EOF (end of file) character. Only when EOF is encountered is a second check made, to determine which buffer has to be refilled, and the corresponding action performed. Hence the average number of tests per input character approaches one.

Sentinels

For each character read, we make two tests: one for the end of the buffer, and one to

determine what character is read. We can combine the buffer-end test with the test for the

current character if we extend each buffer to hold a sentinel character at the end. The

sentinel is a special character that cannot be part of the source program, and a natural choice

is the character eof. Note that eof retains its use as a marker for the end of the entire input.

Any eof that appears other than at the end of a buffer means that the input is at an end.


forward := forward + 1;
if forward↑ = eof then begin
    if forward at end of first half then begin
        reload second half;
        forward := forward + 1
    end
    else if forward at end of second half then begin
        reload first half;
        move forward to beginning of first half
    end
    else /* eof within a buffer signifying end of input */
        terminate lexical analysis
end
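The same logic as a C sketch. The buffer size, the '\0' sentinel, and the helper names are our own choices (the pseudocode above uses eof as the sentinel); this illustrates the scheme rather than a fixed interface:

#include <stdio.h>

#define N 4096                 /* size of each buffer half                     */
#define SENTINEL '\0'          /* assumes '\0' never occurs in the source text */

static char buf[2 * N + 2];    /* two halves, each followed by a sentinel slot */
static char *forward;
static FILE *in;

static void fill(char *half) { /* load one half, then plant the sentinel      */
    size_t got = fread(half, 1, N, in);
    half[got] = SENTINEL;
}

/* Return the next source character, or -1 at the real end of input. */
static int next_char(void) {
    for (;;) {
        char c = *forward++;
        if (c != SENTINEL) return (unsigned char)c;
        if (forward - 1 == buf + N)              /* sentinel at end of first half  */
            fill(buf + N + 1);                   /* forward already points there   */
        else if (forward - 1 == buf + 2 * N + 1) {
            fill(buf);                           /* sentinel at end of second half */
            forward = buf;
        } else
            return -1;                           /* sentinel inside a half: real eof */
    }
}

int main(void) {
    in = fopen("source.txt", "r");               /* hypothetical input file */
    if (!in) return 1;
    fill(buf);
    forward = buf;
    for (int c; (c = next_char()) != -1; )
        putchar(c);
}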

2. State the different compiler construction tools & their use. (6)

The compiler writer, like any software developer, can profitably use modern

software development environments containing tools such as language editors,

debuggers, version managers, profilers, test harnesses, and so on. In addition

to these general software-development tools, other more specialized tools have been

created to help implement various phases of a compiler.

These tools use specialized languages for specifying and implementing specific components, and many use quite sophisticated algorithms. The most successful

tools are those that hide the details of the generation algorithm and produce components

that can be easily integrated into the remainder of the compiler. Some commonly used

compiler-construction tools include

1. Parser generators that automatically produce syntax analyzers from a grammatical

description of a programming language.

2. Scanner generators that produce lexical analyzers from a regular-expression

description of the tokens of a language.

3. Syntax-directed translation engines that produce collections of routines for walking a

parse tree and generating intermediate code.

4. Code-generator generators that produce a code generator from a collection of rules for

translating each operation of the intermediate language into the machine language for a

target machine.

5. Data-flow analysis engines that facilitate the gathering of information about how

values are transmitted from one part of a program to each other part. Data-flow analysis

is a key part of code optimization.


6. Compiler-construction toolkits that provide an integrated set of routines for

constructing various phases of a compiler.


Explain briefly the grouping of phases. (5 marks)


Front end: machine independent phases

o Lexical analysis

o Syntax analysis

o Semantic analysis

o Intermediate code generation

o Some code optimization

The front end depends upon the source language and is independent of the target machine or object program. It includes lexical analysis, syntax analysis, the symbol table, intermediate code generation, a small amount of code optimization, and also the error handling and table management that go along with these phases.

Back end: machine dependent phases

o Code optimization (machine-dependent optimizations)

o Final code generation

Depends on the object language and is independent of the source language.




Specification of Tokens

Regular expressions are an important notation for specifying lexeme patterns.

Strings and Languages

An alphabet is a finite set of symbols.


A string over an alphabet is a finite sequence of symbols drawn from that alphabet.

A language is any countable set of strings over some fixed alphabet.

In language theory, the terms "sentence" and "word" are often used as synonyms for "string."

The length of a string s, usually written |s|, is the number of occurrences of symbols in s.

For example, banana is a string of length six. The empty string, denoted ε, is the string of

length zero.

Operations on Languages

Let L and M be languages.

Union of L and M:           L ∪ M = { s | s is in L or s is in M }

Intersection of L and M:    L ∩ M = { s | s is in L and s is in M }

Concatenation of L and M:   LM = { st | s is in L and t is in M }

Exponentiation of L:        L^0 = { ε },  L^i = L L^(i-1)

Kleene closure of L:        L* = ∪ L^i for i >= 0   (zero or more concatenations)

Positive closure of L:      L+ = ∪ L^i for i >= 1   (one or more concatenations)

For example, if L = { a, b }, then L^2 = { aa, ab, ba, bb } and L* contains every string of a's and b's, including ε.

Rules governing the languages

If L and M are two languages, then

L ∪ M = M ∪ L

∅ ∪ L = L ∪ ∅ = L

∅L = L∅ = ∅

If M = {ε}, the language containing only the empty string, then

{ε}L = L{ε} = L

Terms for Parts of Strings

The following string-related terms are commonly used:

1. A prefix of string s is any string obtained by removing zero or more symbols from the end of s. For example, ban, banana, and ε are prefixes of banana.

2. A suffix of string s is any string obtained by removing zero or more symbols from the beginning of s. For example, nana, banana, and ε are suffixes of banana.

3. A substring of s is obtained by deleting any prefix and any suffix from s. For example, banana, nan, and ε are substrings of banana.

4. The proper prefixes, suffixes, and substrings of a string s are those prefixes, suffixes, and substrings, respectively, of s that are neither ε nor s itself.

5. A subsequence of s is any string formed by deleting zero or more not necessarily consecutive positions of s. For example, baan is a subsequence of banana.

Regular Expressions

1. Each regular expression r denotes a language L(r).

2. Here are the rules that define the regular expressions over some alphabet Σ and the

languages that those expressions denote.

3. ε is a regular expression, and L(ε) is { ε }, that is, the language whose sole member is the

empty string.

4. If a is a symbol in Σ, then a is a regular expression, and L(a) = {a}, that is, the language

with one string, of length one, with a in its one position.

5. Suppose r and s are regular expressions denoting languages L(r) and L(s), respectively.

(r)|(s) is a regular expression denoting the language L(r) ∪ L(s).

(r)(s) is a regular expression denoting the language L(r)L(s).

(r)* is a regular expression denoting (L(r))*.

(r) is a regular expression denoting L(r).

The unary operator * has highest precedence and is left associative.

Concatenation has second highest precedence and is left associative. | has lowest

precedence and is left associative.

A language that can be defined by a regular expression is called a regular set.

If two regular expressions r and s denote the same regular set, we say they are equivalent

and write r = s. For instance, (a|b) = (b|a).

Regular Expression Operations

Three basic operations:

Choice among alternatives – indicated by the metacharacter |
    R|S, where L(R|S) = L(R) ∪ L(S)

Concatenation – RS
    L(RS) = L(R)L(S)

Repetition – Kleene closure (finite concatenations of strings)
    R*

Precedence (highest first): repetition, concatenation, choice.

Rules for constructing a RE over an alphabet Σ

ε is a RE.

If a is a symbol in Σ, then a is a regular expression.

If r and s are regular expressions, then
    r | s is a RE
    rs is a RE

If r is a regular expression, then
    r* is a RE
    (r) is a RE

Axioms for RE

The operator | is
    commutative:  r|s = s|r
    associative:  r|(s|t) = (r|s)|t = r|s|t

The operator . (concatenation) is
    associative:  r.(s.t) = (r.s).t
    distributive over | :  r(s|t) = rs | rt

ε is the identity for concatenation:  εr = rε = r

(r*)* = r*,  and  r* = rr* | ε

(r|s)* = (r*s*)* = (r*|s*)* = (r*s)*r*

rr* = r*r

(rs)*r = r(sr)*

Notational Shorthands

Certain constructs occur so frequently in regular expressions that it is convenient to introduce notational shorthands for them.


One or more instances (+)

- The unary postfix operator + means "one or more instances of".

- If r is a regular expression that denotes the language L(r), then (r)+ is a regular expression that denotes the language (L(r))+.

- Thus the regular expression a+ denotes the set of all strings of one or more a's.

- The operator + has the same precedence and associativity as the operator *.

Zero or one instance (?)

- The unary postfix operator ? means "zero or one instance of".

- The notation r? is a shorthand for r | ε.

- If r is a regular expression, then (r)? is a regular expression that denotes the language L(r) ∪ {ε}.

Character Classes

- The notation [abc], where a, b and c are alphabet symbols, denotes the regular expression a | b | c.

- A character class such as [a-z] denotes the regular expression a | b | c | ... | z.

- Identifiers can be described as the strings generated by the regular expression

[A-Za-z][A-Za-z0-9]*
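A quick hand-coded matcher for this pattern in C (our own illustration using the standard ctype functions); it returns the length of the longest identifier prefix of its input, which is exactly the longest-match rule a scanner applies:

#include <ctype.h>
#include <stdio.h>

/* Length of the longest prefix of s matching [A-Za-z][A-Za-z0-9]*, or 0. */
int match_identifier(const char *s) {
    int n;
    if (!isalpha((unsigned char)s[0])) return 0;    /* first char: [A-Za-z] */
    for (n = 1; isalnum((unsigned char)s[n]); n++)  /* rest: [A-Za-z0-9]*   */
        ;
    return n;
}

int main(void) {
    printf("%d\n", match_identifier("rate2*60"));   /* prints 5 ("rate2") */
}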

Regular Set

- A language denoted by a regular expression is said to be a regular set.

Non-regular Set

- A language which cannot be described by any regular expression.

Eg. The set of all strings of balanced parentheses cannot be described by any regular expression; it can, however, be specified by a context-free grammar. Repeating strings (of the form ww) are another non-regular set.