Computer Organization Lecture 04


  • Slide 1/37

    IT225: Computer Organizations

    Lecture 4, August 27, 2014 (Wednesday)

  • Slide 2/37

    Assignments

    For today:

    2.1.5-2.1.6 (instruction-level parallelism, processor-level parallelism); 2.2.1-2.2.2 (memory)

    For next week:

    Read Ch. 2.2.3-2.2.6 (memory); review Homework #1 questions; Homework #2 small-group presentations

  • Slide 3/37

    Processors

    Figure 2-1. The organization of a simple computer with one CPU and two I/O devices.

  • Slide 4/37

    CPU Organization

    The data path of a typical Von Neumann machine.

  • Slide 6/37

    Instruction Execution (3)

    Figure 2-3. An interpreter for a simple computer (written in Java).

    . . .

  • Slide 7/37

    Instruction Execution (4)

    Figure 2-3. An interpreter for a simple computer (written in Java), continued.

    . . .
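    Since the listing itself is not reproduced here, below is a minimal sketch of the kind of fetch-decode-execute loop that Figure 2-3 illustrates. The instruction format, opcode values, and class name are illustrative assumptions, not Tanenbaum's actual code.

    // A minimal fetch-decode-execute interpreter in the spirit of Figure 2-3.
    // Instruction format (assumed for illustration): a 4-digit decimal word,
    // opcode in the thousands digit, operand address in the low three digits.
    public class SimpleInterpreter {
        static final int HALT = 0, LOAD = 1, ADD = 2, STORE = 3, JUMP = 4;

        public static void run(int[] memory, int startAddress) {
            int pc = startAddress;   // program counter
            int acc = 0;             // accumulator
            boolean running = true;

            while (running) {
                int ir = memory[pc];        // fetch the next instruction
                pc = pc + 1;                // advance the program counter
                int opcode = ir / 1000;     // decode: opcode ...
                int address = ir % 1000;    // ... and operand address

                switch (opcode) {           // execute
                    case LOAD:  acc = memory[address];        break;
                    case ADD:   acc = acc + memory[address];  break;
                    case STORE: memory[address] = acc;        break;
                    case JUMP:  pc = address;                 break;
                    case HALT:  running = false;              break;
                    default: throw new IllegalStateException("bad opcode " + opcode);
                }
            }
        }

        public static void main(String[] args) {
            // Demo: load memory[100], add memory[101], store the sum at memory[102], halt.
            int[] mem = new int[200];
            mem[0] = 1100; mem[1] = 2101; mem[2] = 3102; mem[3] = 0;
            mem[100] = 7;  mem[101] = 35;
            run(mem, 0);
            System.out.println(mem[102]);   // prints 42
        }
    }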

  • Slide 8/37

    Instruction Execution (5)

    Benefits of machines with interpreted instructions:

    Ability to fix incorrectly implemented instructions in the field, even make up for design deficiencies in basic hardware

    Opportunity to add new instructions at minimal cost, even after delivery of the machine

    Structured design that permitted efficient development, testing, and documenting of complex instructions

  • Slide 9/37

    Design Principles for Modern Computers

    a) All instructions directly executed by hardware
    b) Maximize rate at which instructions are issued
    c) Instructions should be easy to decode
    d) Only loads and stores should reference memory
    e) Provide plenty of registers

  • Slide 10/37

    Acronyms

    PC: Program Counter
    MAR: Memory Address Register
    MDR: Memory Data Register
    IR: Instruction Register
    ACC: Accumulator
    ALU: Arithmetic Logic Unit
    MM: Main Memory
    Ctrl or CU: Control Unit

  • Slide 11/37

    Little Man Computer

    The Little Man Computer (LMC) is an instructional model of a computer, created by Dr. Stuart Madnick in 1965. [1] The LMC is generally used to teach students, because it models a simple von Neumann architecture computer, which has all of the basic features of a modern computer. It can be programmed in machine code (albeit in decimal rather than binary) or assembly code. [2][3][4]
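    As a concrete illustration of that decimal machine code, the sketch below spells out a tiny LMC program (read two numbers, add them, output the sum, halt) as a Java array. The opcodes follow the commonly taught LMC encoding; the choice of mailbox 99 as scratch storage is arbitrary and purely for illustration.

    public class LmcExample {
        // A tiny LMC program in decimal machine code, one instruction per mailbox:
        // read two numbers, add them, output the sum, and halt.
        static final int[] PROGRAM = {
            901,  // INP     - read a number into the accumulator
            399,  // STA 99  - store it in mailbox 99
            901,  // INP     - read a second number
            199,  // ADD 99  - add the stored value to the accumulator
            902,  // OUT     - output the sum
            0     // HLT     - halt
        };
    }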

  • Slides 12-29/37

    Little Man Computer figures (Copyright 2010 John Wiley & Sons, Inc.)
  • Slide 30/37

    FYI: in a real CPU, the HLT instruction will be fetched indefinitely.

  • Slide 31/37

    CISC vs. RISC

    CISC: Complex Instruction Set Computer
    RISC: Reduced Instruction Set Computer

    https://www.youtube.com/watch?v=Nmeomd8EzhQ

  • Slide 32/37

    Chapter 4: The Processor (4.5 An Overview of Pipelining)

    Pipelining Analogy

    Pipelined laundry: overlapping execution; parallelism improves performance

    Four loads: Speedup = 8/3.5 = 2.3

    Non-stop: Speedup = 2n/(0.5n + 1.5) ≈ 4 = number of stages
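    Plugging in the numbers behind those figures: each of the four laundry stages takes 0.5 hours, so one load takes 2 hours unpipelined, and n pipelined loads finish in 0.5n + 1.5 hours. Below is a small Java sketch of that calculation; the stage count and stage time are the textbook's laundry numbers, used here only to check the speedups quoted above.

    public class PipelineSpeedup {
        // Speedup of pipelined over unpipelined laundry for n loads, assuming
        // four stages of 0.5 hours each (so 2 hours per load unpipelined).
        static double laundrySpeedup(int n) {
            double unpipelined = 2.0 * n;       // loads done strictly one after another
            double pipelined = 0.5 * n + 1.5;   // first load finishes at 2.0 h, each later load adds 0.5 h
            return unpipelined / pipelined;
        }

        public static void main(String[] args) {
            System.out.println(laundrySpeedup(4));     // 8 / 3.5 = ~2.3
            System.out.println(laundrySpeedup(1000));  // ~3.99, approaching 4, the number of stages
        }
    }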

  • Slide 33/37

    Instruction-Level Parallelism

    a) A five-stage pipeline
    b) The state of each stage as a function of time; nine clock cycles are illustrated

    Assumption: every stage should take an equal amount of time, e.g., 1 clock cycle

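    Under that equal-stage-time assumption, an ideal k-stage pipeline finishes n instructions in k + (n - 1) cycles, which matches the nine cycles needed for the first five instructions on a five-stage pipeline. The sketch below is that idealized count only; it ignores hazards, stalls, and branches.

    public class PipelineTiming {
        // Ideal cycle count for n instructions on a k-stage pipeline with one
        // instruction entering per cycle: the first instruction needs k cycles,
        // and every later instruction completes one cycle after the previous one.
        static int idealCycles(int k, int n) {
            return k + (n - 1);
        }

        public static void main(String[] args) {
            System.out.println(idealCycles(5, 5));   // 9 cycles for the first five instructions
            System.out.println(idealCycles(5, 100)); // 104 cycles, roughly one instruction per cycle
        }
    }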

  • Slide 34/37

    Design Principles for Modern Computers (i.e., RISC architecture)

    All instructions directly executed by hardware
    Maximize rate at which instructions are issued
    Instructions should be easy to decode
    Only loads and stores should reference memory
    Provide plenty of registers

    More details of the design principles will be covered throughout the semester!

  • Slide 35/37

    Superscalar Architectures (1)

    Dual five-stage pipelines with a common instruction fetch unit.

    Definition of a superscalar CPU: a processor that issues multiple instructions in a single clock cycle

    Example: the Intel Pentium (the earlier 80486 had only a single pipeline)
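    To see what issuing multiple instructions per clock buys in the ideal case, the sketch below counts cycles for n instructions on a k-stage pipeline with issue width w. The parameters and the no-dependence, no-stall assumption are illustrative simplifications, not figures from the slide.

    public class SuperscalarTiming {
        // Ideal cycle count for n instructions on a k-stage pipeline that can
        // issue w instructions per clock, assuming no dependences or stalls.
        static int idealCycles(int k, int w, int n) {
            int issueGroups = (n + w - 1) / w;   // cycles spent issuing: ceiling of n / w
            return k + (issueGroups - 1);        // the last group still needs k stages to finish
        }

        public static void main(String[] args) {
            System.out.println(idealCycles(5, 1, 100)); // single issue: 104 cycles
            System.out.println(idealCycles(5, 2, 100)); // dual issue: 54 cycles, nearly double the throughput
        }
    }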

  • Slide 36/37

    Superscalar Architectures (2)

    A superscalar processor with five functional units.

  • Slide 37/37

    Conclusion

    Discuss Homework 2