Foundations of Software Design, Fall 2002, Marti Hearst. Lecture 4: Operating Systems

How Computers Work: CPU cont, the Memory Hierarchy, Programs


Page 1: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Foundations of Software Design, Fall 2002, Marti Hearst

Lecture 4: Operating Systems 

 

Page 2: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Today

• Revisit instructions and the CPU
• Measuring CPU Performance
• The Memory Hierarchy
• Operating Systems
  – Programs vs. Processes
  – Process Scheduling
  – Deadlock
  – Memory Management

Page 3: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Instructions and Their Representation

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

Page 4: How Computers Work: CPU cont, the Memory Hierarchy, Programs


The CPU

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

Bus: bundles of wires connecting things.

Instruction Register: holds the instruction currently being executed.

Program Counter: points to the current memory location of the executing program.

Decoder: uses the 3-bit opcode to control the MUX and the ALU and the memory buses.

MUX: multiplexer – chooses one of N inputs.

ALU: arithmetic logic unit – adds or subtracts two numbers.

Accumulator: stores ALU’s output

RAM: stores the code (instructions) and the data.

Page 5: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Machine code to add two numbers that are stored in memory (locations 13 and 14) and then store the result into memory (location 15).

The Program Counter starts at memory location 0.

Image adapted from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

Page 6: How Computers Work: CPU cont, the Memory Hierarchy, Programs

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

The Program Counter starts at memory location 0.

(Notice: the instructions and the data are both stored in the RAM.)

This instruction moves along the blue bus into the instruction register.

The decoder checks the opcode's number bit to see how to handle the operand.

For each instruction, we have to do something with the operand.

Page 7: How Computers Work: CPU cont, the Memory Hierarchy, Programs

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

If the opcode number bit = 0, we want the operand to be used as a memory address.

So the decoder switches the MUX to connect the operand to the RAM.

If the opcode bit = 1, the MUX would connect the operand to the ALU instead.

The green bus does the addressing of the RAM.

The blue bus does the reading from and writing to the RAM.
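The routing described above can be sketched in Python. The 8-bit layout here (one number bit, a 3-bit opcode, a 4-bit operand) is an illustrative assumption, not necessarily the simulator's exact encoding:

```python
# Sketch of the decoder's operand routing. The 8-bit layout below
# (one number bit, 3-bit opcode, 4-bit operand) is an assumption
# for illustration, not the simulator's documented format.
def decode(instruction):
    number_bit = (instruction >> 7) & 0b1    # 1 = literal, 0 = address
    opcode = (instruction >> 4) & 0b111      # 3-bit operation code
    operand = instruction & 0b1111           # address or literal value
    if number_bit == 0:
        mux = "RAM (use operand as an address)"
    else:
        mux = "ALU (use operand as a literal)"
    return opcode, operand, mux

# An instruction with number bit 0: the operand names a memory address.
result = decode(0b0001_1101)
```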

Page 8: How Computers Work: CPU cont, the Memory Hierarchy, Programs

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

The Decoder tells the ALU this is a LOAD instruction.

Memory location 13 is activated, and its value is placed on the blue bus. This goes through the multiplexer into the ALU. It also trickles down into the Accumulator (LOAD is implemented as adding the value to 0).

The last step (always, except for a jump) is for the Program Counter to get incremented. Now it points to memory location 1.

Page 9: How Computers Work: CPU cont, the Memory Hierarchy, Programs

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

Instruction: ADD 14

Similar to LOAD 13, except the ALU is given the ADD operation.

The contents of the Accumulator become the left-hand operand, and the contents of memory become the right-hand operand.

Results of the ADD end up in the Accumulator.

The Program Counter increments.

Page 10: How Computers Work: CPU cont, the Memory Hierarchy, Programs

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

Instruction: STORE 15

Memory address 15 is activated via the green bus. (Because the decoder sets the MUX to connect to the memory bus.)

The contents of the Accumulator are written to RAM using the blue bus.

The Program Counter increments.

The next instruction is HALT.
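The whole walkthrough (LOAD 13, ADD 14, STORE 15, HALT) can be sketched as a tiny fetch-decode-execute loop in Python. The string mnemonics and dictionary RAM are simplifications of the simulator's bit-level encoding:

```python
# A minimal fetch-decode-execute loop for the program above. String
# mnemonics and a dictionary RAM stand in for the real bit encoding.
LOAD, ADD, STORE, HALT = "LOAD", "ADD", "STORE", "HALT"

def run(ram):
    pc, acc = 0, 0                    # Program Counter and Accumulator
    while True:
        op, operand = ram[pc]         # fetch into the instruction register
        if op == LOAD:
            acc = ram[operand]        # LOAD: add the memory value to 0
        elif op == ADD:
            acc += ram[operand]       # accumulator + memory operand
        elif op == STORE:
            ram[operand] = acc        # write the accumulator back to RAM
        elif op == HALT:
            return ram
        pc += 1                       # last step: increment the PC

# Instructions and data share the same RAM (a von Neumann machine).
ram = {0: (LOAD, 13), 1: (ADD, 14), 2: (STORE, 15), 3: (HALT, 0),
       13: 5, 14: 7}
run(ram)                              # afterwards ram[15] holds 5 + 7
```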

Page 11: How Computers Work: CPU cont, the Memory Hierarchy, Programs

Images from http://courses.cs.vt.edu/~csonline/MachineArchitecture/Lessons/

There are other nice examples in the online simulation. They show the use of explicit numbers (represented by # signs), along with memory addresses.

Page 12: How Computers Work: CPU cont, the Memory Hierarchy, Programs


CPU Performance

• CPU cycles are not the whole story.
• The number of instructions per second plays a big role.
  – Each operation on the computer requires some number of instructions.
  – Each instruction takes some number of cycles to execute.
  – So … a CPU with an instruction set and circuit design that requires fewer cycles to get operations done might have better performance than a CPU with a faster clock but more complicated circuitry.
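The trade-off above can be put in a back-of-the-envelope formula: execution time = instructions × cycles per instruction ÷ clock rate. The numbers below are illustrative, not from the slides:

```python
# Illustrative numbers only: a slower clock can still win if each
# instruction takes fewer cycles on average.
def exec_time_s(instructions, cycles_per_instruction, clock_hz):
    return instructions * cycles_per_instruction / clock_hz

faster_clock = exec_time_s(1e9, 4, 2.0e9)  # 2 GHz, 4 cycles/instruction
simpler_design = exec_time_s(1e9, 2, 1.5e9)  # 1.5 GHz, 2 cycles/instruction
```

Here the 1.5 GHz machine finishes the same billion instructions sooner than the 2 GHz one.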

Page 13: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Measuring CPU Performance

• MIPS:
  – Millions of instructions per second
  – “Meaningless indicator of performance” (a joke)
  – Some instructions require more time than others
  – No standard way to measure MIPS
• FLOPS:
  – Floating-point operations per second
  – Measures the speed of the floating-point unit
    • A special circuit that does math on real numbers
    • So it doesn’t measure performance on non-numeric applications
• Benchmark:
  – A test used to compare performance of hardware and/or software
  – Different benchmarks measure different aspects of performance
• SPEC:
  – Standard Performance Evaluation Corporation
  – A set of standardized tasks (benchmarks) that CPUs are run on in order to assess their performance

Adapted from http://webopedia.internet.com/

Page 14: How Computers Work: CPU cont, the Memory Hierarchy, Programs


CPU Performance

CPU clocks are related to another commonly used term: real time.

• Real time
  – Means regular clock time, as opposed to CPU clock time
  – Other factors also prevent a program from running in real time
    • CPU scheduling, discussed in today’s lecture
• In common usage today, real time means right now, as I’m talking, as opposed to offline, when I have to think on my own, or when we’re not in the middle of a timed event with others listening.

Page 15: How Computers Work: CPU cont, the Memory Hierarchy, Programs

The Memory Hierarchy

Image from http://www.howstuffworks.com/computer-memory1.htm

Page 16: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Types of Computer Memory

• Characteristics
  – Volatile vs. nonvolatile
  – Solid state vs. mechanical
  – Read-only vs. read-write (RW)
  – Expense
  – Speed
  – Storage capacity (a function of size & expense)

Page 17: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Types of Computer Memory

• Registers (made up of flip-flop-like circuits)
  – Volatile, small, expensive, fast, solid state, RW
• DRAM (grid format using capacitance)
  – Connected to the rest of the CPU via a memory bus
  – Volatile, now large, expensive, fast, solid state, RW
• ROM (grid format using diodes)
  – Nonvolatile, read-only, cheap, fast
• Hard disk
  – Nonvolatile, mechanical, RW, cheap, slow
• Flash memory (grid format using fancy electronics)
  – Nonvolatile, solid state, RW, expensive, fast, small
  – Versus hard disk: silent, lighter, faster … but expensive!
• Flash RAM (e.g., car radio presets)
  – Like flash memory, but needs power
• Other external storage: CD-ROM, floppy, Zip

Adapted from http://www.howstuffworks.com/computer-memory.htm

Page 18: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Caching

• Caches store items that have been used recently, for faster access.
• Based on the notion of locality:
  – Something used recently is more likely to be used again soon than other items.
• Can be done using:
  – Registers
  – RAM
  – Hard disk, etc.
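A software analogue of this idea can be sketched with the Python standard library's LRU ("least recently used") cache; `slow_fetch` here is a made-up stand-in for a slow memory access:

```python
# Caching sketch: slow_fetch stands in for a slow memory access.
# Recently used results are kept, so repeats become cache hits.
from functools import lru_cache

calls = []                             # records actual "slow" accesses

@lru_cache(maxsize=2)                  # a tiny cache: 2 recent entries
def slow_fetch(address):
    calls.append(address)              # only runs on a cache miss
    return address * 2                 # pretend this is the stored value

slow_fetch(13)                         # miss: goes to "memory"
slow_fetch(13)                         # hit: served from the cache
```

After both calls, the slow path has run only once, which is exactly the locality payoff described above.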

Image from http://cne.gmu.edu/modules/vm/orange/realprog.html

Page 19: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Types of Computer Memory

• Cache
  – The term refers to how the memory is used
  – Can be volatile or on disk
• L1 cache – memory accesses at full microprocessor speed (10 nanoseconds, 4 kilobytes to 16 kilobytes in size)
• L2 cache – memory access of type SRAM (around 20 to 30 nanoseconds, 128 kilobytes to 512 kilobytes in size)
• Main memory – memory access of type RAM (around 60 nanoseconds, 32 megabytes to 128 megabytes in size)
• Hard disk – mechanical, slow (around 12 milliseconds, 1 gigabyte to 10 gigabytes in size)
• Internet – incredibly slow (between 1 second and 3 days, unlimited size)
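Why the hierarchy pays off can be seen with a back-of-the-envelope average access time. The latencies are the figures above; the 95% hit rate is an illustrative assumption:

```python
# Average access time = hit_rate * fast + miss_rate * slow.
# The 95% hit rate is assumed for illustration, not a slide figure.
def effective_access_ns(hit_rate, hit_ns, miss_ns):
    return hit_rate * hit_ns + (1 - hit_rate) * miss_ns

# L1 cache at 10 ns backed by main memory at 60 ns:
avg_ns = effective_access_ns(0.95, 10, 60)
```

With most accesses hitting the 10 ns level, the average stays close to 10 ns even though main memory is six times slower.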

Adapted from http://www.howstuffworks.com/computer-memory.htm

Page 20: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Levels of a Memory Hierarchy

http://www.ece.arizona.edu/~ece369/lec10S01/sld047.htm

Page 21: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Operating Systems

• Main roles:
  – Allow user applications to run on a given machine’s hardware
  – Act as a “traffic cop” for allocating resources
• Resources
  – CPU, I/O devices, Memory, Files, Allocation
• Processes
  – versus Programs
  – Scheduling
  – Synchronization
• Memory Management
• File Management

Image from http://courses.cs.vt.edu/~csonline/OS/Lessons/Introduction/index.html

Page 22: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Programs vs. Processes

• Program
  – The code, both in human- and machine-readable form
• Process
  – A running program is a process.
  – It is allocated a number of resources:
    • Registers in the CPU
    • Some part of the CPU’s RAM
    • I/O devices (the monitor, files in the file system)
• Several copies of the same program can be running as different processes.
  – But data values, current Program Counter location, etc., will differ among the different processes.
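A sketch of that distinction in Python; the `Process` class and its fields here are hypothetical, a stand-in for the operating system's per-process bookkeeping, not a real OS API:

```python
# Hypothetical stand-in for the OS's per-process state (a "process
# control block"), not a real operating-system API.
class Process:
    def __init__(self, program):
        self.program = program        # the code, shared and read-only
        self.pc = 0                   # per-process Program Counter
        self.registers = {}           # per-process register values

program = ["LOAD 13", "ADD 14", "STORE 15", "HALT"]
p1 = Process(program)                 # two processes running the
p2 = Process(program)                 # same program...
p1.pc = 2                             # ...but at different points in it
```

Both processes share one copy of the code, yet each carries its own Program Counter and register values, which is why they can be scheduled independently.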

Page 23: How Computers Work: CPU cont, the Memory Hierarchy, Programs


Programs vs. Processes

• How does this relate to Object-Oriented Programming?
• An important difference:
  – Class definition vs.
  – Object creation / instantiation
• Class definitions are created at compile time.
• Objects don’t really get created until the program is running as a process.
  – The “new” command in Java tells the operating system to allocate some memory and put the new object in that space.
• When the process ends,
  – the class definitions do not disappear, but
  – the objects disappear, meaning that the memory allocated to the objects gets reallocated for some other purpose.
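The same point in Python, with a made-up `Point` class: the class definition exists once in the code, while objects are allocated only while the process runs:

```python
# The Point class is a made-up example. The definition exists once;
# each instantiation (like Java's "new") allocates a fresh object
# in the running process's memory.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y         # storage allocated per object

a = Point(1, 2)                       # two objects from one class:
b = Point(1, 2)                       # equal contents, separate memory
```

When this process exits, the two objects' memory is reclaimed, but the `Point` definition in the source code is unaffected.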