The minimum number of arithmetic operations required to evaluate the polynomial

P(X) = X^{5} + 4X^{3} + 6X + 5

for a given value of X, using only one temporary variable, is _____.

**Concept:**

To count the operations when a single temporary variable is used, rewrite the polynomial in nested (Horner-like) form; the bracket pairs then correspond to multiplications and the + signs to additions.

**Explanation:**

P(X) = X^{5} + 4X^{3} + 6X + 5

P(X) = X (X^{4} + 4X^{2} + 6) + 5

P(X) = X (X (X^{3} + 4X) + 6) + 5

P(X) = X (X (X (X^{2} + 4)) + 6) + 5

P(X) = X (X (X (X (X) + 4)) + 6) + 5

__Method 1__ -

Let T be a temporary variable to store intermediate results.

- T = (X) * (X)
- T = T + 4
- T = (X) * (T)
- T = (X) * (T)
- T = T + 6
- T = (X) * T
- T = T + 5

Hence, 7 operations are used with one temporary variable.

__Method 2__ -

Counting the number of bracket pairs in the nested form gives the number of multiplication operations = 4

Counting the number of + signs gives the number of addition operations = 3

Total operations = 7

Option 1 : Dynamic memory allocation

**Dynamic Memory Allocation:**

Performed at run time, when an executing program requires a memory block.

**Type Checking:**

A check performed during the semantic analysis phase of compilation.

**Symbol Table:**

Symbol table management stores and retrieves information about tokens (identifiers, literals, etc.) during the compilation phases.

**Inline Expansion:**

A type of macro expansion in the pre-processor, which happens even before compilation starts.

It replaces a function call with the body of the respective function.

Hence option 1 is the correct answer.

Option 4 : software pipelining

- Out of Order Execution is also called Dynamic Execution. In this mode of execution, the instructions aren't executed precisely as per the program control flow.
- It is a technique used to get back some of that wasted execution bandwidth.
- With out-of-order execution, the processor would issue each of the instructions in program order, and then enter a new pipeline stage called "read operands" during which instructions whose operands are available would move to the execution stage, regardless of their order in the program.
- The term issue could be redefined at this point to mean "issue and read operands."
- Software pipelining is a method used to optimize loops, in a way that parallels hardware pipelining. It is like out-of-order execution, except that the reordering is done by the compiler rather than the processor.

Consider the expression (a – 1) * (((b + c) / 3) + d). Let X be the minimum number of registers required by an optimal code generation (without any register spill) algorithm for a load/store architecture, in which (i) only load and store instructions can have memory operands and (ii) arithmetic instructions can have only register or immediate operands. The value of X is _____.

**Concept:**

Register spilling means moving a variable from a register to memory; moving a variable from memory to a register is called filling. The advantage of using a register rather than memory is faster access.

**Explanation:**

Expression: (a – 1)*(((b + c) / 3) + d)

Load R_{0}, b // Load b into register R_{0}

Load R_{1}, c // Load c into register R_{1}

Add R_{1}, R_{0} // R_{1} = R_{1} + R_{0}

Now, register R_{0} is free.

Div R_{1}, 3 // R_{1} = R_{1} / 3

Load R_{0}, d // Load d into register R_{0}

Add R_{1}, R_{0} // R_{1} = R_{1} + R_{0}

Load R_{0}, a // Load a into register R_{0}

Sub R_{0}, 1 // R_{0} = R_{0} – 1

Mul R_{1}, R_{0} // R_{1} = R_{1} * R_{0}

So, minimum 2 registers are needed here.

Option 2 : Local optimization

**Concept:**

Peephole optimization is a technique for locally improving the target code which is done by examining a sliding window of target instructions and replacing the instruction sequences within the peephole by shorter or faster sequences wherever possible.

Constant folding is a peephole optimization.

int a = 3 + 9;

int a = 12; **// constant folding**

**Important Point:**

The code in the peephole need not be contiguous although some implementations do require this. Some characteristics of peephole optimization are:

- Redundant instruction elimination
- Elimination of unreachable code
- Reduction in strength
- Algebraic simplifications
- Use of machine idioms

Consider evaluating the following expression tree on a machine with load-store Architecture in which memory can be accessed only through load and store instructions. The variables a, b, c, d and e are initially stored in memory. The binary operators used in this expression tree can be evaluated by the machine only when the operands are in registers. The instructions produce result only in a register. If no intermediate results can be stored in memory, what is the minimum number of registers needed to evaluate this expression?

Option 4 : 3

**Answer**: Option 4

**Explanation**:

Exp – Load R_{1}, a ; R_{1} ← M[a]

Load R_{2}, b ; R_{2} ← M[b]

Sub R_{1}, R_{2} ; R_{1} ← R_{1} - R_{2}

Load R_{2}, c ; R_{2} ← M[c]

Load R_{3}, d ; R_{3} ← M[d]

Add R_{2}, R_{3} ; R_{2} ← R_{2} + R_{3}

Load R_{3}, e ; R_{3} ← M[e]

Sub R_{3}, R_{2} ; R_{3} ← R_{3} - R_{2}

Add R_{1}, R_{3} ; R_{1} ← R_{1} + R_{3}

Total 3 Registers are required minimum.

Option 1 : Loop optimization

The correct answer is Local optimization

__Key Points__

- Peephole optimization is an optimization technique performed on a small set of compiler-generated instructions; the small set is known as the peephole or window.
- Peephole optimization involves changing the small set of instructions to an equivalent set that has better performance.

**Example:**

- Instead of pushing register A onto the stack and then immediately popping the value back into register A, a peephole optimization would remove both instructions;
- Instead of adding A to A, a peephole optimization might do an arithmetic shift left.

__Important Points__

The code in the peephole need not be contiguous although some implementations do require this. Some characteristics of peephole optimization are:

- Redundant instruction elimination
- Elimination of unreachable code
- Reduction in strength
- Algebraic simplifications

Consider the following ANSI C code segment:

z = x + 3 + y -> f1 + y -> f2;

for (i = 0; i < 200; i = i + 2){

if (z > i) {

p = p + x + 3;

q = q + y -> f2;

} else {

p = p + y -> f2;

q = q + x + 3;

}

}

Assume that the variable y points to a struct (allocated on the heap) containing two fields f1 and f2, and the local variables x, y, z, p, q, and i are allotted registers. Common sub-expression elimination (CSE) optimization is applied on the code. The number of addition and dereference operations (of the form y -> f1 or y -> f2 ) in the optimized code, respectively, are:

Option 4 : 303 and 2

__Answer__: Option 4

**Common Subexpression Elimination (CSE)**

CSE is a code-optimization technique in which we eliminate redundant expressions: each repeated expression is computed once and stored in a temporary variable, so that it does not need to be evaluated again and again in the code/loop.

**Explanation**:

In the given code, the expressions that repeat in every iteration of the loop are x + 3 and y -> f2. After performing common subexpression elimination we get:

temp1 = x + 3

temp2 = y -> f1

temp3 = y -> f2

z = temp1 + temp2 + temp3;

for (i = 0; i < 200; i = i + 2){

if (z > i) {

p = p + temp1;

q = q + temp3;

} else {

p = p + temp3;

q = q + temp1;

}

}

Now we get **1 addition (temp1 = x + 3) + 2 additions (z = temp1 + temp2 + temp3) + 100 additions (i = i + 2) + 2 × 100 additions (if/else bodies) = 303 additions, and 2 dereference operations** (y -> f1 and y -> f2).

For a statement S in a program, in the context of liveness analysis, the following sets are defined:

USE(S): the set of variables used in S

IN(S): the set of variables that are live at the entry of S

OUT(S): the set of variables that are live at the exit of S

Consider a basic block that consists of two statements, S_{1} followed by S_{2}.

Which one of the following statements is correct?

Option 2 : OUT(S1) = IN(S2)

**Answer**: Option 2

**Concept**:

**Live variables**: A variable is said to be live at some point if, from that point to the exit of the program, the variable is used (only an R-value occurrence) without being redefined (no intervening L-value occurrence).

__Explanation__:

__Option 1__: OUT(S1) = USE(S1) ∪ IN(S2)

If a variable is **redefined** and **used** in the same statement S_{1}, it will **not** be live at OUT(S_{1}); but USE(S_{1}) would wrongly **add** that variable to the set OUT(S_{1}).

So this is **incorrect**.

**Option 2**: OUT(S1) = IN(S2)

This is **correct**. Every variable that is live at the exit of S_{1} is live at the entry of S_{2}, since S_{2} immediately follows S_{1} within the basic block.

**Option 3**: OUT(S1) = IN(S1) ∪ USE(S1)

Consider a variable that was **live before S_{1} but not after S_{1}** (its last use is in S_{1}): IN(S_{1}) would then wrongly place it in OUT(S_{1}).

So this is __incorrect__.

**Option 4**: OUT(S1) = IN(S2) ∪ USE(S2)

Consider a variable that is **redefined** and **used** in the same statement S_{2}: **USE(S_{2})** would add it to OUT(S_{1}) even when it is not live at the exit of S_{1}.

So this is __incorrect__.

Option 1 : they enhance the portability of the compiler to other target processors

- The main purpose of performing code optimization on intermediate code is that it enhances the portability of the compiler to other target processors.

**Important points:**

The statement "program analysis is more accurate on intermediate code than on machine code" is also correct, but option 1 is the better answer.

Option 4 : x = 4 * 5 ⇒ x = 20 is an example of common subexpression elimination

__Option 1: __True

Since no jump or branch statements are possible inside a basic block, it is a sequence of instructions in which control flows linearly, entering at the beginning and exiting at the end.

__Option 2: __True

Common Subexpression Elimination is an optimization that searches for instances of identical expressions, and replaces them with a single variable holding the computed value. Hence, available expression analysis can be used for common subexpression elimination.

__Option 3: __True

Dead code elimination is a compiler optimization that removes code which does not affect the program results. Live variable analysis is a classic data-flow analysis that calculates the variables live at each point in the program. Hence, live variable analysis can be used for dead code elimination.

__Option 4: False__

Common subexpression elimination (CSE) is a compiler optimization that replaces identical expressions (i.e., expressions that evaluate to the same value) with a single variable holding the computed value, when it is worthwhile to do so. Evaluating 4 * 5 to 20 at compile time is constant folding, not common subexpression elimination, so the statement is false.

Which of the following steps are followed by HIS during synthesis?

1. Data model generation

2. Data flow analysis

3. Scheduling and allocation

4. Data path optimization

5. Control optimization

Option 4 : 1, 2, 3, 4 and 5

The high-level IBM synthesis system (HIS) is capable of synthesizing VHDL descriptions. Synthesis is performed uniformly.

The main synthesis steps in HIS, in the correct sequence, are:

- Data model (Graph) generation
- Data flow analysis
- Scheduling and allocation
- Data path optimization
- Control optimization

Option 1 : It is applied to small part of the code and applied repeatedly

**Concepts:**

Peephole optimization is an optimization technique performed on a small set of compiler-generated instructions; the small set is known as the peephole or window.

**Important Points:**

An optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program.

Common requirements are to minimize a program's execution time, memory requirement, and power consumption.

Consider the code segment:

int i, j, x, y, m, n;

n=20;

for (i = 0; i < n; i++)

{

for (j = 0; j < n; j++)

{

if (i % 2)

{

x += ((4*j) + 5*i);

y += (7 + 4*j);

}

}

}

m = x + y;

Which one of the following is false?

Option 4 : There is scope of dead code elimination in this code

**Optimized code:**

int i, j, x, y, m, n;

n=20;

for (i = 1; i < n; i = i + 2)

{

for (j = 0; j < n; j++)

{

x += ((j << 2) + 5*i);

y += 4*j;

}

y += 7*n;

}

m = x + y;

**Explanation:**

In the given code there is no scope for dead code elimination, but there is scope for the other three optimization methods.