Amdahl’s law can be applied in contexts other than parallel processing. Suppose that a numerical application consists of 20% floating-point and 80% integer/control operations (percentages based on operation counts rather than execution times). A floating-point operation takes three times as long to execute as any other operation. We are considering a redesign of the floating-point unit in a microprocessor to make it faster.
- Formulate a more general version of Amdahl’s law in terms of selective speed-up of a portion of a computation rather than in terms of parallel processing.
In its usual form, Amdahl’s Law states:

Execution time after improvement = (Execution time affected by improvement) / (Amount of improvement) + Execution time unaffected

More generally, let f be the fraction of the original execution time spent in the portion being improved, and let s be the factor by which that portion is sped up. Then:

Overall speedup = 1 / ((1 − f) + f/s)

For this problem, take an integer/control operation as 1 time unit, so a floating-point operation takes 3 units. Out of every 100 operations, 20 are floating-point, so the total time is 20 × 3 + 80 × 1 = 140 units, of which 60 units are floating-point. The time fraction affected by the redesign is therefore f = 60/140 = 3/7 ≈ 0.43.
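The general relation can be checked numerically. Below is a minimal sketch; the function name `amdahl_speedup` is ours, and the fraction f is derived from the problem’s numbers (20% floating-point operations, each three times as slow as the rest):

```python
def amdahl_speedup(f, s):
    """Overall speedup when a time-fraction f of the workload is sped up by s.

    f: fraction of the original execution time affected by the improvement
    s: speedup factor applied to that fraction
    """
    return 1.0 / ((1.0 - f) + f / s)

# Time fraction spent in floating point: 20 ops * 3 units vs. 80 ops * 1 unit.
f_fp = (20 * 3) / (20 * 3 + 80 * 1)  # 60/140 = 3/7 ≈ 0.43

print(f_fp)
print(amdahl_speedup(f_fp, 2.0))  # overall speedup if the FP unit were 2x faster
```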
- How much faster should the new floating-point unit be for 25% overall speed improvement?
A 25% overall speed improvement means an overall speedup of 1.25, so the new execution time must be 140 / 1.25 = 112 units. The 80 units of integer/control time are unaffected, so the floating-point time must drop from 60 units to 112 − 80 = 32 units. The new floating-point unit must therefore be 60/32 = 1.875 times faster than the current one.
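Solving Amdahl’s relation for the required speedup factor of the improved portion gives s = f / (1/S − (1 − f)), where S is the target overall speedup. A small sketch (the helper name `required_unit_speedup` is an assumption of this illustration):

```python
def required_unit_speedup(f, target_speedup):
    """Factor by which the affected portion (time-fraction f) must be sped up
    so that the whole program runs target_speedup times faster."""
    remaining = 1.0 / target_speedup - (1.0 - f)
    if remaining <= 0:
        raise ValueError("target speedup exceeds the 1/(1 - f) limit")
    return f / remaining

f_fp = 60 / 140  # floating-point time fraction from the problem
print(required_unit_speedup(f_fp, 1.25))  # ≈ 1.875
```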
- What is the maximum speed-up that we can hope to achieve by only modifying the floating-point unit?
Even if the floating-point unit were made infinitely fast, the 80 units of integer/control time would remain. Letting s → ∞ in the general formula gives the bound:

Maximum speedup = 1 / (1 − f) = 140/80 = 1.75

So no redesign of the floating-point unit alone can make the application more than 1.75 times faster.
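The limit can be confirmed by letting the floating-point speedup grow without bound in the same Amdahl relation (the names below are ours):

```python
f_fp = 60 / 140  # floating-point time fraction

def overall_speedup(s):
    # Amdahl's law: only the fraction f_fp benefits from the factor-s improvement.
    return 1.0 / ((1.0 - f_fp) + f_fp / s)

for s in (2.0, 10.0, 100.0, 1e6):
    print(s, overall_speedup(s))  # climbs toward, but never reaches, 1/(1 - f) = 1.75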