Policy-based CPU-scheduling in VOs
Catalin Dumitrescu, Mike Wilde, Ian Foster
Some Background (Grid-style Monitoring)
Some Background (Policy-based Sched)
Introduction
●For example, in the sciences, there may be hundreds of institutions and thousands of individual investigators that collectively control tens or hundreds of thousands of computers and associated storage systems
●Each individual institution may participate in, and contribute resources to, multiple collaborative projects that can vary widely in scale, lifetime, and formality
[Diagram: VO-level virtual queues (V-Queue) for VO-A and VO-B dispatching to site queues (S-Queue) at Sites 1, 2, and 3]
Initial Model
●Assumption: Participants may wish to delegate to one or more VOs the right to use certain resources subject to local policy (and service level agreements), and each VO then wishes to use those resources subject to VO policy
●How are such local and VO policies to be expressed, discovered, interpreted, and enforced?
[Diagram: the queuing setup extended with policy enforcement points: a Verifier and V-PEP per VO (VO-A, VO-B), and an S-PEP in front of each S-Queue at Sites 1, 2, and 3]
Talk Overview
●Part I: Model Detailing
  ●Architecture / Model description
  ●Policy language definition (syntax & semantics)
●Part II: Specific work
  ●Policy case scenarios (research focus)
  ●Algorithms
  ●Simulator & Simulated model
  ●Dimension identification
●Part III: Simulations & Implementation issues
  ●Simulation results
  ●Related work
  ●“Rolling out in Atlas/Grid3”
●Future work & Conclusions
Simplified Model
●Composed of:
  ●R = compute resource (several individual compute elements)
  ●M = associated manager (designed to control resource R's states)
  ●{P} = policy set (a finite list of intents expressed by administrators)
●Some Rules:
  ●M is authorized and responsible for enforcing {P} with respect to R
  ●{P} is composed only of rules that have direct correlation with R
●And the Mapping to a Concrete Case Scenario:
  ●a cluster with 1 head-node and several worker-nodes
  ●compute resources are managed locally by one or several pooling and/or queuing software managers (e.g., Condor, PBS, LSF)
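The simplified model above can be sketched as data structures. This is a minimal illustration, not the paper's implementation; the class and field names (`Resource`, `Manager`, `PolicyRule`) are assumptions chosen to mirror R, M, and {P}.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    """One intent expressed by an administrator about a resource."""
    resource: str
    entity: str            # e.g., a VO name
    limits: list           # [(interval, percent), ...]

@dataclass
class Resource:
    """R: a compute resource made of several compute elements."""
    name: str
    compute_elements: int

@dataclass
class Manager:
    """M: authorized and responsible for enforcing {P} on R."""
    resource: Resource
    policies: list = field(default_factory=list)

    def add_rule(self, rule: PolicyRule) -> None:
        # Rule from the model: {P} may contain only rules that have a
        # direct correlation with R.
        if rule.resource != self.resource.name:
            raise ValueError("rule does not correlate with managed resource")
        self.policies.append(rule)

# Concrete case: a cluster with 1 head-node and several worker-nodes.
cluster = Resource("R", compute_elements=32)
m = Manager(cluster)
m.add_rule(PolicyRule("R", "V1", [("year", 20), ("5minutes", 60)]))
print(len(m.policies))  # 1
```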
Extended Model
●Composed of:
  ●G = several sites, where each of them is of type S
  ●M = associated manager(s) (designed to control S's states by delegation)
  ●{P} = policy set, a finite list of intents expressed by administrators
●Some Rules:
  ●M is authorized and responsible for distributing {P} with respect to R down to R
  ●{P} is composed of rules that have direct correlation with G
  ●the distributed {PR} is composed only of rules that have direct correlation with R
●And the Mapping to a Concrete Case Scenario:
  ●a set of clusters of type S
  ●the monitoring and policy mechanisms are VO-Centric Ganglia (as prototype)
Refined Prototype Model
Policy Language Definition
●Two types of policies:
  ●absolute: its arguments are mapped to VOs and site resources
  ●relative: its arguments are mapped to groups and VOs' resources
●Two types of constraints (open problem regarding enforcement):
  ●long-term hard limitations and short-term soft limitations
  ●identified by position in the presented syntax
●Proposed interpretations (to avoid ambiguities):
  ●long-term hard limitations
    ●averaged over period (at most): sites provide (if requested) at most the specified fraction over the specified time interval
  ●short-term soft limitations
    ●upper limits over period (a maximum): sites may provide up to the specified fraction over the specified time interval (but without any guarantee in place)
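The two interpretations differ in what they constrain: the hard limit bounds the average usage over the period, while the soft limit bounds any momentary usage within it. A minimal sketch of these checks, assuming usage is sampled as a percentage of site CPUs (function names are illustrative):

```python
def avg_within_hard_limit(usage_samples, limit_pct):
    """Long-term hard limit ('at most'): the usage fraction averaged
    over the whole period must not exceed the limit."""
    return sum(usage_samples) / len(usage_samples) <= limit_pct

def peak_within_soft_limit(usage_samples, limit_pct):
    """Short-term soft limit ('a maximum'): an upper bound on any
    individual sample in the period, with no availability guarantee."""
    return max(usage_samples) <= limit_pct

# A VO's usage samples (percent of site CPUs) over one interval:
samples = [10, 30, 25, 15]
print(avg_within_hard_limit(samples, 20))   # average 20.0 <= 20 -> True
print(peak_within_soft_limit(samples, 60))  # peak 30 <= 60 -> True
```

Note that the same trace can satisfy a 20% long-term limit while momentarily running at 30%, which is exactly what the soft limit permits.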
Simple/Proposed Syntax
●Two identifiable forms:
  ●absolute policy:
    ●resource (RESOURCE, ENTITY, LIST_POLICY)
    ●Examples:
      ●resource (R, V1, [(year, 20), (5minutes, 60)])
      ●resource (R, V2, [(year, 80), (5minutes, 90)])
  ●relative policy:
    ●subset (RESOURCE, ENTITY, GROUP, LIST_POLICY)
    ●Examples:
      ●subset (R, V1, G1, [(year, 30), (5minutes, 100)])
      ●subset (R, V1, G2, [(year, 70), (5minutes, 100)])
●Note: definitions and examples are independent (i.e., R has different interpretations in the two examples)
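To make the two forms concrete, here is a small parser for statements in the proposed syntax. This is a sketch under assumptions: the slides define the syntax only by example, so the regular expression and the dictionary field names are illustrative, not part of the original language definition.

```python
import re

# Matches both 'resource(...)' (absolute) and 'subset(...)' (relative).
POLICY_RE = re.compile(
    r"(?P<kind>resource|subset)\s*\(\s*(?P<args>[^][]*)\s*,\s*"
    r"\[(?P<limits>.*)\]\s*\)"
)

def parse_policy(text):
    m = POLICY_RE.match(text)
    if not m:
        raise ValueError("not a policy statement: " + text)
    head = [a.strip() for a in m.group("args").split(",")]
    pairs = re.findall(r"\(\s*(\w+)\s*,\s*(\d+)\s*\)", m.group("limits"))
    limits = [(interval, int(pct)) for interval, pct in pairs]
    if m.group("kind") == "resource":      # absolute: VO vs. site resources
        resource, entity = head
        return {"kind": "absolute", "resource": resource,
                "entity": entity, "limits": limits}
    resource, entity, group = head          # relative: group vs. VO resources
    return {"kind": "relative", "resource": resource,
            "entity": entity, "group": group, "limits": limits}

p = parse_policy("resource (R, V1, [(year, 20), (5minutes, 60)])")
print(p["kind"], p["limits"])  # absolute [('year', 20), ('5minutes', 60)]
```

The positional LIST_POLICY entries carry the long-term hard limit and the short-term soft limit, matching the "identified by position" rule above.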
Motivation for the Language
●Users can burn their allocation faster or slower, controlled by the two limits
●Possible to map to site RMs with node allocation policies (e.g., Condor, OpenPBS & LSF)
Policy Case Scenarios ● Case 1
[Chart: allocation percentages for VO1 and VO2; values shown: 99%, 80%, 20%, 60%, 90%]
Policy Case Scenarios ● Case 2
[Chart: allocation percentages for VO1 and VO2; values shown: 99%, 80%, 20%, 60%, 90%]
Implemented Algorithms (Site)
for each Gi with EPi, BPi, BEi do
  # Case 1: fill BPi + BEi
  if (Sum(BAj) == 0) & (BAi < BPi) & (Qi has jobs) then
    schedule a job from some Qi to the least loaded site
  # Case 2: BAi < BPi (resources available)
  else if (Sum(BAk) < TOTAL) & (BAi < BPi) & (Qi has jobs) then
    schedule a job from some Qi to the least loaded site
  # Case 3: fill EPi (resource contention)
  else if (Sum(BAk) == TOTAL) & (BAi < EPi) & (Qi exists) then
    if (j exists such that BAj >= EPj) then
      stop scheduling jobs for VOj
    # Need to fill with extra jobs?
    if (BAi < EPi + BEi) then
      schedule a job from some Qi to the least loaded site
  # ??
  if (EAi < EPi) & (Qi has jobs) then
    schedule additional backfill jobs
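The three cases of the site-level loop can be sketched as a runnable step function. This is an illustrative simplification, not the paper's implementation: the semantics of EP, BP, BE (policy limits) and BA (busy allocation) are inferred from the pseudocode above, and the function only decides which group may schedule next.

```python
def site_schedule_step(groups, total):
    """groups: dict name -> {'EP':..,'BP':..,'BE':..,'BA':..,'queue':[..]}.
    Returns the name of the group allowed to schedule one job, or None."""
    busy = sum(g["BA"] for g in groups.values())
    for name, g in groups.items():
        if not g["queue"]:          # Qi must have jobs
            continue
        # Case 1: nothing running yet -> fill up to BPi
        if busy == 0 and g["BA"] < g["BP"]:
            return name
        # Case 2: free resources remain -> fill up to BPi
        if busy < total and g["BA"] < g["BP"]:
            return name
        # Case 3: contention -> allow up to EPi (+ BEi for extra jobs)
        if busy == total and g["BA"] < g["EP"] + g["BE"]:
            return name
    return None

groups = {
    "G1": {"EP": 6, "BP": 4, "BE": 1, "BA": 4, "queue": ["j1"]},
    "G2": {"EP": 4, "BP": 2, "BE": 0, "BA": 1, "queue": ["j2"]},
}
print(site_schedule_step(groups, total=10))  # G2 (still under its BP)
```

G1 is skipped because it has already reached its BP and there is no contention yet, so G2, still below its base allocation, schedules next.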
Implemented Algorithms (VO)
for each VOi with EPi do
  # Case 1: fill BPi
  if (Sum(BAj) == 0) & (BAi < BPi) & (Qi has jobs) then
    release a job from some Qi
  # Case 2: BAi < BPi (resources available)
  else if (Sum(BAk) < TOTAL) & (BAi < BPi) & (Qi has jobs) then
    release a job from some Qi
  # Case 3: fill EPi (resource contention)
  else if (Sum(BAk) == TOTAL) & (BAi < EPi) & (Qi has jobs) then
    if (j exists such that BAj >= EPj) then
      stop scheduling jobs for VOj
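The distinguishing part of the VO-level loop is case 3: under full contention, VOs that already meet or exceed their EP are throttled so others can reach theirs. A minimal sketch of that check, with semantics inferred from the pseudocode (the function name is illustrative):

```python
def vos_to_throttle(vos, total):
    """vos: dict name -> {'EP':..,'BA':..}. When the pool is fully busy,
    return the VOs whose busy allocation meets or exceeds their EP."""
    busy = sum(v["BA"] for v in vos.values())
    if busy < total:
        return []   # cases 1 and 2: resources still available, no throttling
    return [name for name, v in vos.items() if v["BA"] >= v["EP"]]

vos = {"VO1": {"EP": 6, "BA": 7}, "VO2": {"EP": 4, "BA": 3}}
print(vos_to_throttle(vos, total=10))  # ['VO1']
```

Here VO1 has exceeded its entitlement (7 of 10 CPUs against an EP of 6), so its scheduling stops while VO2 catches up toward its own EP.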
Simulations
●Structures:
  ●2 VOs * 2 groups * 1 planner with 3 clusters
  ●6 VOs * 3 groups * 2 planners with 10 clusters
●Model:
  ●S-PEP:
    ●continuous monitoring
    ●job control by sending high-level commands to RMs
  ●V-PEP:
    ●gatekeeper type (access control point)
Talk Overview
●Part I: Model Detailing
  ●Architecture / Model description
  ●Policy language definition (syntax & semantics)
●Part II: Specific work
  ●Policy case scenarios (research focus)
  ●Algorithms
  ●Simulator & Simulated model
  ●Dimension identification
●Part III: Simulations & Implementation issues
  ●Simulation results
  ●Related work
  ●“Rolling out in Atlas/Grid3”
●Future work & Conclusions
Initial Simulations (settings)
2 VOs * 2 groups * 1 planner with 3 clusters (1 * (1+2+4) + 1 * (1+2+4+8) + 1 * (1+2+4+8) CPUs)
[Charts: job statistics and CPU usage vs. policy for VO0 and VO1]
More Simulations
6 VOs * 3 groups * 2 planners with 20 clusters (1 * (1+2+4) + ... CPUs)
[Charts: overall CPU usage; per-group CPU usage]
Simulation Variations / Dimensions
●Algorithms
●Technical solution for mappings
●Site-level trust
●Centralized vs. decentralized
●Total information vs. inaccurate (stale) information
Technical Approach to Grid03
[Diagram: components include User, Job Submission, Job Selector, Group Queues, Site Selector, Policy DB, Policy Translator, RM, and Resources]
Future Work
●Mainly, analysis on several dimensions
Glimpse into Policy Setup
●Negotiation & Advance Resource Reservation
[Diagram: SLA negotiation between a User (SLA Initiator, Policy Rules, Job Submission), a VO (V-RM, V`-PEP, V-AP), and Sites A and B (each with RM, S`-PEP, S-AP, and Resources); SLA documents exchanged include SN-SLA, SM-SLA, VN-SLA, VM-SLA, VNA-SLA, and VMA-SLA]
Conclusions