
Page 1: Architectural Evaluation – Overview, Questioning Techniques

Henrik Bærbak Christensen, DAIMI – OOSS

Page 2: Literature

  Main
  – [Bass et al., 2003] chap. 11 (ATAM)
    • Bass, L., Clements, P., Kazman, R. Software Architecture in Practice. Addison-Wesley, 2003.

  – [Barbacci et al., 2003]
    • Barbacci, M., Ellison, R., Lattanze, A., Stafford, J., Weinstock, C., and Wood, W. (2003). Quality Attribute Workshops, 3rd Edition. Technical Report CMU/SEI-2003-TR-016, Software Engineering Institute.

  – [Bardram et al., 2004]
    • Bardram, J., Christensen, H. B., and Hansen, K. M. (2004). Architectural Prototyping: An Approach for Grounding Architectural Design and Learning. In Proceedings of the 4th Working IEEE/IFIP Conference on Software Architecture (WICSA 2004), pages 15–24, Oslo, Norway.

Page 3: A Simplified Development Process

  NB! Iterative, incremental, parallel, … process

[Diagram of the simplified development process, with boxes for Functional Requirements, Quality Requirements, Architectural Requirements, Software Architecture, and Implementation, and a "?" marking the evaluation question]

Page 4: Rationale for Architectural Evaluation

  Software architecture allows or precludes almost all system quality attributes
  – Need to evaluate impact of architectural design on quality attributes
  – Should be possible…

  Architecture needs to be designed early
  – E.g., prerequisite for work assignment

  Architecture embodies fundamental design decisions
  – Hard/costly to change these
  – The later the change, the costlier it is

  Cheap techniques for architectural evaluation exist

Page 5: When?

  Architecture designed, design not implemented
  – I.e., early in a cycle in an iteration
  – E.g., “We have designed this architecture – will it possibly fit the quality requirements?”

  Early
  – Evaluate design decisions
    • Either design decisions made or considered
    • Architecture description may not be available
  – Completeness and fidelity are a product of the state of the architectural description
  – E.g., “Will any architecture meet these quality requirements?”
    • “You can have any combination of features the Air Ministry desires, so long as you do not also require that the resulting airplane fly” (Messerschmitt)

  Late
  – Evaluate architecture for an existing system
    • Possible redesign, evolution, …
  – E.g., “We need to integrate with this product – will it be suitable in our architecture?”

Page 6: Who?

  Roles
  – Evaluation team
    • Preferably not project staff
      – Objectivity reasons…
  – Project stakeholders
    • Articulate requirements
    • “Garden variety” and decision makers
    • Producers
      – Software architect, developer, maintainer, integrator, standards expert, performance engineer, security expert, project manager, reuse czar
    • Consumers
      – Customer, end user, application builder (for product line)
    • Servicers
      – System administrator, network administrator, service representatives

  Needs depend on the actual type of evaluation

Page 7: What?

  Probe of suitability of architecture(s)
  – Will the system meet quality goals?
  – Will it still be buildable?

  Context-specific
  – E.g., modifiability depends on modification scenarios

  Need articulated goals
  – Major part of the evaluation process
  – Particularly in large, organizationally complex projects

  Most often not scalar results
  – “00, 03, … , 13”
  – Often focus on finding risks
    • Which decisions affect which qualities?

Page 8: Questioning Techniques

  Analytical
  – Any project area, typically review
  – Any state of completeness

  Questionnaires
  Checklists
  Scenario-based methods

Page 9: Overview – Check-lists / Questionnaires

Page 10: Questionnaires

  List of relatively open questions
  – Apply to all architectures

  Process and product questions
  – Cf. Capability Maturity Model (CMM) for software

  E.g., Kruchten

Page 11: Checklists

  Detailed questions
  – Based on experience
  – Domain-specific

  E.g., AT&T

Page 12: Overview – Scenario-based methods

Page 13: Scenario-Based Methods

  Scenarios
  – Description of interaction with the system from the point of view of a stakeholder
    • System-specific – developed as part of the project
    • Cf. quality-attribute scenarios as in [Bass et al., 2003]

  Types
  – Quality Attribute Workshops (QAW)
    • Structured way of involving stakeholders in scenario generation and prioritization
  – Architecture Tradeoff Analysis Method (ATAM)
    • Creates utility trees to represent quality attribute requirements
    • Analysis of architectural decisions to identify sensitivity points, tradeoffs, and risks
  – Software Architecture Analysis Method (SAAM)
    • Brainstorming of modifiability and functionality scenarios
    • Scenario walkthrough to verify functionality support and estimate change costs
  – Active Reviews for Intermediate Designs (ARID)
    • Active Design Review of software architecture
      – E.g., “Is the performance of each component adequately specified?” vs. “For each component, write down its maximum execution time and list the shared resources that it may consume”
  – Survivable Network Analysis method (SNA)
    • Focus on survivability as quality attribute
    • Process
      – Determine essential components based on essential services and assets
      – Map intrusion scenarios onto architecture to find “soft-spot” components (essential, but vulnerable)
      – Analyze these wrt resistance, recognition, and recovery of/from attacks
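The SNA “soft-spot” step is essentially a set intersection; a minimal sketch in Python, where the component names and the two sets are hypothetical and only the essential-but-vulnerable rule comes from the slide:

```python
# Components that support essential services/assets (hypothetical names)
essential = {"session manager", "billing", "auth server"}
# Components reached by the mapped intrusion scenarios (hypothetical)
vulnerable = {"auth server", "web frontend"}

# Soft spots: essential, but vulnerable - candidates for resistance/recognition/recovery analysis
softspots = essential & vulnerable
print(softspots)   # -> {'auth server'}
```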

Page 14: Quality Attribute Workshops (QAWs)

  Facilitated method that engages stakeholders in discovering and prioritizing quality attribute requirements
  – [Barbacci et al., 2003]
  – Typically started before the architecture is designed/finished

  Steps
  1. QAW Presentation and Introductions
  2. Business/Mission Presentation
  3. Architectural Plan Presentation
  4. Identification of Architectural Drivers
  5. Scenario Brainstorming
  6. Scenario Consolidation
  7. Scenario Prioritization
  8. Scenario Refinement

Page 15: QAW Steps (2)

  1. Presentation and Introductions

  2. Business/Mission Presentation
  – Stakeholders present drivers for the system
    • E.g., profitability, time-to-market, better quality of service, …
  – High-level functional requirements, constraints, quality attribute requirements

  3. Architectural Plan Presentation
  – How will business/mission drivers be satisfied?
  – Key technical requirements
    • E.g., mandated technical decisions
  – Existing, preliminary architectural descriptions

  4. Identification of Architectural Drivers
  – Keys to realizing quality attribute goals of the system
    • E.g., key requirements, business drivers, quality attributes
  – Distilled version of steps 2 and 3

Page 16: QAW Steps (3)

  5. Scenario Brainstorming
  – Goal
    • Come up with as many well-formed quality attribute scenarios as possible
    • Stimulus, environment, response
  – Participants
    • Come up with quality attribute scenarios
    • No critique as such, only clarification questions
  – Facilitator
    • Write scenarios on the whiteboard
    • Ensure that scenarios are usable
      – “The system shall be modifiable” vs. “The user interface of … is changed to a different look & feel in two person-days”
    • Make sure architectural drivers are covered
  – Either a fixed time period or whenever participants run out of good ideas
    • Usually easy to create 30-40 scenarios
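A well-formed scenario from this step can be captured in a tiny data structure; a minimal sketch in Python, assuming the three-part stimulus/environment/response form named above (the example values are hypothetical, loosely based on the look & feel scenario):

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """A brainstormed QAW scenario in stimulus/environment/response form."""
    stimulus: str      # what happens to the system
    environment: str   # under which conditions
    response: str      # the measurable reaction required

modifiability = QualityAttributeScenario(
    stimulus="The user interface is changed to a different look & feel",
    environment="During normal maintenance, by one developer",
    response="The change is completed within two person-days",
)
print(modifiability.response)
```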

Page 17: QAW Steps (4) – Scenario Types

  Use case scenarios
  – Expected use of the system

  Growth scenarios
  – Anticipated changes to the system

  Exploratory scenarios
  – Unanticipated stresses to the system

Page 18: QAW Steps (5)

  6. Scenario Consolidation
  – Merge similar scenarios
  – Requires stakeholder agreement – majority consensus if need be

  7. Scenario Prioritization
  – Each stakeholder has N = ceil(#scenarios * 0.3) votes on the consolidated scenarios
  – Round-robin voting (see the sketch below)
    • Two passes
    • Each pass: allocate half of the votes
  – Resulting count = prioritization
    • Typically high, medium, low priority

  8. Scenario Refinement
  – Develop high-priority scenarios according to the scheme of [Bass et al., 2003]
    • Stimulus, source of stimulus, environment, artifact stimulated, response, response measure
    • Describe relevant quality attributes
    • Elicit questions and issues
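The voting arithmetic in step 7 is simple enough to sketch; a minimal illustration in Python, assuming a plain tally of the round-robin votes, with the high/medium/low split left to the facilitator (scenario names are hypothetical):

```python
import math
from collections import Counter

def votes_per_stakeholder(num_scenarios: int) -> int:
    """Each stakeholder gets ceil(30% of the consolidated scenario count) votes."""
    return math.ceil(num_scenarios * 0.3)

def prioritize(ballots):
    """ballots: one list of voted-for scenario names per stakeholder (repeats allowed)."""
    tally = Counter()
    for ballot in ballots:
        tally.update(ballot)
    return tally.most_common()   # highest vote count first = highest priority

# With 30 consolidated scenarios each stakeholder gets ceil(30 * 0.3) = 9 votes,
# handed out over two passes (e.g. 5 in the first pass, 4 in the second).
print(votes_per_stakeholder(30))   # -> 9

ballots = [["video latency", "failover", "video latency"],   # hypothetical ballots
           ["failover", "look & feel"]]
print(prioritize(ballots))   # -> [('video latency', 2), ('failover', 2), ('look & feel', 1)]
```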

Page 19: QAW Results

  Outputs
  – List of architectural drivers
  – Raw scenario list
  – Prioritized scenario list
  – Refined scenarios

  Uses
  – Update architectural vision, refine requirements
  – Guide design and development
  – Facilitate further analysis/evaluation
    • E.g., ATAM

Page 20: Architecture Tradeoff Analysis Method (ATAM)

  Context
  – “Large US public projects”
    • Government, military, space
    • DoD, NASA, …

  Goals
  – Reveal how well the architecture satisfies quality goals
  – Insight into tradeoffs among quality attributes
  – Identify risks

  Structured method
  – Repeatable analysis
    • Should yield the same results when applied twice
  – Guides users to look for conflicts and resolutions

  Parts
  – Presentation
  – Investigation
  – Testing
  – Reporting

Page 21: ATAM Steps (1) – Presentation

  Presentation
  – 1. Present the ATAM
    • Evaluation leader describes the method…
  – 2. Present Business Drivers
    • Most important functions
    • Relevant constraints
      – Technical, managerial, economic
    • Business goals
    • Major stakeholders
    • Architectural drivers
      – Major quality attribute goals, which shape the architecture
  – 3. Present the Architecture
    • In terms of views…
    • Focus on how drivers are met

Page 22: ATAM Steps (2) – Investigation and Analysis

  4. Identify Architectural Approaches Used
  – Architect names the approaches used
    • Approach = here, a set of architectural decisions, e.g., tactics that are used
  – Goal
    • Eventually match desired qualities and decisions

Page 23: ATAM Steps (3) – Investigation and Analysis

  5. Generate the Quality Attribute Utility Tree
  – Decision makers refine the most important quality attribute goals
    • Quality: Performance
      – Refinement: Data latency
        » Scenario: Deliver video in real-time
  – Specify prioritization
    • Importance with respect to system success
      – High, Medium, Low
    • Difficulty in achieving
      – High, Medium, Low
    • (H,H), (H,M), (M,H) most interesting
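A utility tree is a small hierarchy with (importance, difficulty) labels on its leaf scenarios; a minimal sketch in Python, using the performance/data-latency example from the slide plus one hypothetical extra leaf, and filtering for the (H,H), (H,M), (M,H) combinations called out above:

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    quality: str      # e.g. "Performance"
    refinement: str   # e.g. "Data latency"
    scenario: str
    importance: str   # "H", "M" or "L"
    difficulty: str   # "H", "M" or "L"

def interesting(leaves):
    """Keep the (H,H), (H,M), (M,H) leaves - the ones to analyse first."""
    keep = {("H", "H"), ("H", "M"), ("M", "H")}
    return [s for s in leaves if (s.importance, s.difficulty) in keep]

tree = [
    Leaf("Performance", "Data latency", "Deliver video in real-time", "H", "H"),
    Leaf("Modifiability", "UI changes", "New look & feel in two person-days", "M", "L"),  # hypothetical
]
print([s.scenario for s in interesting(tree)])   # -> ['Deliver video in real-time']
```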

Page 24: ATAM Steps (4) – Investigation and Analysis

  6. Analyze Architectural Approaches
  – Match quality requirements in the utility tree with architectural approaches
    • How is each high-priority scenario realized?
    • Quality- and approach-specific questions asked – e.g., well-known weaknesses of the approach
  – Decision makers identify
    • Sensitivity points
      – Property of component(s) critical in achieving a particular quality attribute response
      – E.g., security: level of confidentiality vs. number of bits in the encryption key
    • Tradeoff points
      – Property that is a sensitivity point for more than one attribute
      – E.g., encryption level vs. security and performance, in particular if hard real-time guarantees are required
    • Risks
      – Potentially unacceptable values of responses
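The difference between sensitivity and tradeoff points can be made concrete with a toy mapping; a minimal sketch in Python, where the encryption-key example comes from the slide and the rest of the mapping is hypothetical:

```python
# For each architectural decision: the quality attributes it is a sensitivity point for.
sensitivity = {
    "encryption key length": ["security", "performance"],   # from the slide's example
    "number of server replicas": ["availability"],           # hypothetical
}

# A tradeoff point is a sensitivity point for more than one attribute.
tradeoff_points = [d for d, attrs in sensitivity.items() if len(attrs) > 1]
print(tradeoff_points)   # -> ['encryption key length']
```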


Page 26: ATAM Steps (6) – Testing

  7. Brainstorm and Prioritize Scenarios
  – Akin to QAWs
    • Larger group of stakeholders
    • Place results in the utility tree

  8. Analyze Architectural Approaches

  9. Present Results
  – Outputs
    • Documented architectural approaches
    • Set of scenarios and prioritizations
    • Utility tree
    • (Non-)risks
    • Sensitivity and tradeoff points

Page 27: ATAM and QAW – Discussion

  How to ensure completeness?
  – And thus eventually suitability of the architecture evaluated

  Does process quality => product quality?

  How do the approaches fare in an iterative setting?
  – Need to rework scenarios etc. iteratively…

Page 28: Overview – Measuring techniques

Page 29: Measuring Techniques

  Prerequisite
  – Artifacts to do measurements on…

  May answer specific quality attribute scenarios
  – Cf. experimental vs. exploratory prototyping

  Types
  – Metrics
  – Simulation, prototypes, experiments
  – Rate-Monotonic Analysis
  – ADL-based

Page 30: Metrics

  Quantitative interpretation of an observable measurement of the software architecture
  – Cf. [ISO, 2001]

  E.g., measuring complexity to predict modifiability
  – Real-time object-oriented telecommunication systems
    • Number of events reacted to
    • Number of {asynchronous, synchronous} calls made
    • Number of component clusters
      – Object decomposition units, e.g., car as wheels, transmission, steering, …
    • Depth of inheritance tree
    • …

  E.g., flow metrics to predict reliability
  – Call-graph metrics
    • Number of modules used by a module
    • Total calls to other modules
    • Unique calls to other modules
  – Control-flow metrics, given a control-flow graph
    • Number of if-then conditional arcs
    • Number of loop arcs
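As a rough illustration of the call-graph metrics, a minimal sketch in Python that counts total and unique outgoing calls per module; the call list and module names are hypothetical, only the two metrics come from the slide:

```python
from collections import defaultdict

# Hypothetical call graph: (caller module, callee module), one entry per call site.
calls = [
    ("billing", "db"), ("billing", "db"), ("billing", "auth"),
    ("ui", "billing"), ("ui", "auth"),
]

total_calls = defaultdict(int)    # total calls made to other modules
unique_calls = defaultdict(set)   # distinct modules used by each module

for caller, callee in calls:
    total_calls[caller] += 1
    unique_calls[caller].add(callee)

for module in sorted(total_calls):
    print(module, total_calls[module], len(unique_calls[module]))
# billing 3 2
# ui 2 2
```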

Page 31: Simulations, Prototypes, Experiments

  Architectural prototyping: [Bardram et al., 2004]

– Exploratory: finding solutions

– Experimental: evaluating solutions

Page 32: Rate-Monotonic Analysis

  Static, quantitative analysis
  – Ensuring dependability in hard real-time systems

  Preemptive multitasking; execute the task with the highest priority

  Basic idea
  – Assign a priority to each process according to its period (rate)
    • The shorter the period, the higher the priority

  RMA ensures schedulability of processes
  – No process misses its execution deadline
  – Worst-case schedule bound: W(n) = n * (2^(1/n) - 1)
    • → ln 2 ≈ 0.69 as n → ∞

  Guarantees schedulability if the concurrency model is observed in the implementation
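A minimal sketch in Python of the utilization test behind that bound (the sufficient Liu & Layland test, not an exact response-time analysis; the task set is hypothetical):

```python
def rm_bound(n: int) -> float:
    """Worst-case schedulable utilization for n tasks: W(n) = n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks) -> bool:
    """tasks = [(execution time C, period T), ...]; passes if total utilization <= W(n)."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_bound(len(tasks))

# Hypothetical periodic task set, times in milliseconds: utilization = 0.25 + 0.25 + 0.10 = 0.60
tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 10.0)]
print(round(rm_bound(3), 3))      # -> 0.78
print(rm_schedulable(tasks))      # -> True
```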

Page 33: Automated Tools

  UML-based
  – Simulation
  – Semantic checks
  – Code generation to ensure conformance
  – …

Page 34: Summary

  A wide range of evaluation approaches exists
  – Questioning techniques
  – Measuring techniques

  Establishing the suitability of the architecture is the main goal
  – Does the architecture fulfill quality goals?
  – Is the architecture buildable within project constraints?

  Utility of approaches depends on context
  – Project state
  – Expertise
  – Tools used
  – Domain
  – …