Developing and Validating a Practical Methodology for Cloud Services Evaluation

Zheng Li, School of Computer Science, ANU & NICTA, [email protected]
Supervisors: Dr. Liam O’Brien, Dr. Rajiv Ranjan, and Dr. Shayne Flint
1 Background and Motivation
Given the diversity of Cloud services and price models, utilising or comparing services requires a deep understanding of how the different candidate services may (or may not) match particular demands. Cloud services evaluation is therefore crucial and beneficial both for service consumers (e.g. for cost-benefit analysis) and for providers (e.g. for identifying directions of improvement).
According to our research, there is a lack of sound methodologies to guide the practice of Cloud services evaluation. Although each existing study must have (at least intuitively) followed a particular approach, the evaluation approaches described in different reports vary, and some even exhibit conceptual issues in their evaluation methodology. For example, some studies treated evaluation methodology as merely the experimental setup and/or the preparation of the experimental environment, and, to the best of our knowledge, none of the existing studies emphasised all the evaluation steps and their corresponding strategies.
2 Solution and Contribution
We have developed and validated the generic and practical
Cloud Evaluation Experiment Methodology (CEEM) [2] for
evaluating Cloud services and Cloud-based applications.
CEEM emphasises the evaluation procedure (rather than just
the evaluation results) through a set of steps that utilise our
developed artefacts and strategies.
We developed knowledge artefacts (Taxonomy, Metrics
Catalogue, Experimental Factor Framework, and Conceptual
Model) to make CEEM more Cloud-specific and more
practical. Guided by CEEM, evaluators can perform systematic
Cloud services evaluation following our rigorous procedure.
More importantly, by generating and reusing CEEM-based
evaluation templates, evaluations would be more repeatable,
and the evaluation results would be more reproducible and
comparable.
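To make the idea of a reusable evaluation template concrete, here is a minimal, purely illustrative sketch in Python; the `EvaluationTemplate` class and all of its field names are hypothetical assumptions, not the actual CEEM template format.

```python
# Hypothetical sketch of a reusable CEEM-based evaluation template.
# All field names are illustrative assumptions, not the authors' format.
from dataclasses import dataclass

@dataclass
class EvaluationTemplate:
    requirement_questions: list[str]        # output of Requirement Recognition
    service_features: list[str]             # output of Service Feature Identification
    metrics_and_benchmarks: dict[str, str]  # feature -> selected metric/benchmark
    experimental_factors: list[str]         # selected experimental factors

    def describe(self) -> str:
        """Summarise the template so another evaluator can judge reuse."""
        return (f"{len(self.requirement_questions)} question(s), "
                f"{len(self.service_features)} feature(s), "
                f"{len(self.experimental_factors)} factor(s)")

# Example: a template an evaluator might fill in for storage latency,
# then share so that others can repeat the same evaluation.
template = EvaluationTemplate(
    requirement_questions=["How fast are small-object reads?"],
    service_features=["storage read latency"],
    metrics_and_benchmarks={"storage read latency": "mean latency (ms)"},
    experimental_factors=["object size", "region", "time of day"],
)
print(template.describe())  # prints: 1 question(s), 1 feature(s), 3 factor(s)
```

Sharing such a filled-in template is what would make a later evaluation repeatable: another evaluator re-runs the same questions, metrics, and factor combinations rather than reinventing them.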
The CEEM procedure (reconstructed from the poster's flowchart; each step is listed with its output):

1. Requirement Recognition: a set of specific requirement questions.
2. Service Feature Identification: the identified service features to be evaluated.
3. Metrics and Benchmarks Listing: a set of lists of candidate metrics and benchmarks.
4. Metrics and Benchmarks Selection: the selected metrics and benchmarks.
5. Experimental Factor Listing: a list of candidate experimental factors.
6. Experimental Factor Selection: the selected experimental factors with various combinations.
7. Experimental Design: experimental blueprints and experimental instructions.
8. Experimental Implementation: experimental results.
9. Experimental Analysis: experimental analysis results.
10. Conclusion and Documentation: an evaluation report describing the whole logic of the evaluation implementation.

Lookup capability is provided by a Taxonomy, a Metrics Catalogue, and a Factor Framework. The final report, supported by Documentation Templates and Evaluation Templates, serves the recipient of the evaluation result who needs the report, or other evaluators who reuse the evaluation templates.
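Purely as an illustration (not the authors' implementation), the step-by-step procedure above can be encoded as an ordered pipeline, which is one way an evaluator might script the sequence for reuse:

```python
# Illustrative encoding of the CEEM steps as (step, expected output) pairs;
# the helper below simply walks them in order. Not the authors' code.
CEEM_STEPS = [
    ("Requirement Recognition", "a set of specific requirement questions"),
    ("Service Feature Identification", "the service features to be evaluated"),
    ("Metrics and Benchmarks Listing", "lists of candidate metrics and benchmarks"),
    ("Metrics and Benchmarks Selection", "the selected metrics and benchmarks"),
    ("Experimental Factor Listing", "a list of candidate experimental factors"),
    ("Experimental Factor Selection", "the selected factors with various combinations"),
    ("Experimental Design", "experimental blueprints and instructions"),
    ("Experimental Implementation", "experimental results"),
    ("Experimental Analysis", "experimental analysis results"),
    ("Conclusion and Documentation", "the evaluation report"),
]

def walk_procedure(steps):
    """Yield a human-readable line for each step, in order."""
    for number, (step, output) in enumerate(steps, start=1):
        yield f"Step {number}: {step} -> {output}"

for line in walk_procedure(CEEM_STEPS):
    print(line)
```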
“A methodology refers to an organised set of principles which guide action in trying to ‘manage’ (in the broad sense) real world problem situations.” [1]
Peter Checkland and Jim Scholes
[1] P. Checkland and J. Scholes, Soft Systems Methodology in Action. New York, NY: John Wiley & Sons Ltd., September 1999.
[2] Z. Li, L. O’Brien, R. Ranjan, S. Flint, and H. Zhang, “On a practical methodology for repeatable and reproducible Cloud services evaluation,” submitted to IEEE Transactions on Emerging Topics in Computing.