REPORT
A Report on the First IFIP/IEEE Workshop on Quality of Experience Centric Management (QCMan 2013)
Steven Latre • Antonio Liotta • Filip De Turck
Received: 28 October 2013 / Revised: 7 January 2014 / Accepted: 15 January 2014 /
Published online: 19 January 2014
© Springer Science+Business Media New York 2014
Abstract The first IFIP/IEEE international workshop on quality of experience
(QoE) centric management (QCMan 2013) was held on May 31, 2013 in Ghent,
Belgium. This report summarizes the keynotes, presentations and discussions in
QCMan 2013 and provides a high-level view of ideas, challenges, strategies and the
current state of the research activities in the field of QoE management.
Keywords Quality of experience · Network management · Service
management · Quality of experience centric management · Subjective
quality assessment · Video coding
1 Introduction
The first IFIP/IEEE international workshop on quality of experience (QoE) centric
management (QCMan 2013) was organized as a forum to combine two neighbour-
ing research fields: quality of experience optimization and network and service
management. In recent years, the Internet has evolved from a pure packet forwarder
to a provider of complex and high demanding services and applications (e.g., video,
S. Latre (✉)
Department of Mathematics and Computer Science, University of Antwerp - iMinds,
Middelheimlaan 1, 2020 Antwerp, Belgium
e-mail: [email protected]
A. Liotta
Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
e-mail: [email protected]
F. De Turck
Department of Information Technology, Ghent University - iMinds, Gaston Crommenlaan 8/201,
9050 Ghent, Belgium
e-mail: [email protected]
J Netw Syst Manage (2014) 22:280–288
DOI 10.1007/s10922-014-9301-0
voice, on-line gaming, cloud applications). These services and applications are
typically managed through a set of Quality of Service parameters (e.g. packet loss,
delay, jitter). However, it is widely agreed that the management of these services
and applications should be centred around their quality as perceived by the end user,
referred to as the QoE [1]. This QoE centric management is greatly challenged in
today’s Internet by (1) the stringent QoE requirements of the supported services and
applications (e.g., timing constraints, loss intolerance) and users (e.g., unpredict-
ability of user behaviour, request for high quality services), (2) the plethora of
service consumption possibilities (e.g. for video: live vs on-demand, managed vs
over-the-top), (3) the inherent complexity of services and applications which can be
offered to users to reach the appropriate QoE level and (4) the difficulty in assessing
the quality as perceived by the end user, partly due to insufficient insight into the
psychological and sociological factors of service and application consumption.
QCMan 2013 aimed at providing an international forum for researchers addressing
these challenges.
The workshop took place on May 31, 2013, in Het Pand in Ghent, Belgium.
QCMan 2013 was held in conjunction with the 13th IFIP/IEEE International
Symposium on Integrated Network Management (IM 2013). The workshop was
sponsored by the IEEE Communications Society (ComSoc) and the International
Federation for Information Processing (IFIP) and was supported by the Technical
University Eindhoven and Ghent University-iMinds. The workshop was endorsed
by the Technical Committee on Network Operations and Management (CNOM).
The first edition of QCMan attracted 22 submissions. These submissions
underwent a rigorous review process, with at least three reviews per paper.
Based on these reviews, 11 full papers and three short papers were accepted for
publication. This shows that there is an important community that
is dedicated to managing networks and services from a Quality of Experience
perspective. The workshop was opened with an interesting keynote discussing
the current state of the art in high QoE web video streaming and future
challenges in cognitive multimedia delivery. The accepted QCMan 2013 full
papers were clustered in four thematic sessions, while the QCMan short papers
were discussed in a dedicated short paper session, featuring short presentations
and a lot of room for discussions. Based on the review comments, scores, quality
of the presentation and answering of the questions, a best paper award was given
at the closing of the workshop.
QCMan stands out as being the only workshop that focuses both on management
aspects as well as QoE-based research. As such, it is a unique event to discuss the
optimization of networks and services, with the ultimate goal of increasing the QoE.
QoE centric management is an inter-disciplinary research field. It requires (1) video
and application experts, who propose novel QoE oriented coding schemes,
(2) networking experts, who propose service aware networking solutions, and
(3) social scientists, who provide insight into the subjective nature of QoE
through user studies. QCMan aimed at bringing together researchers from these
various fields. Therefore, QCMan is the primary forum for researchers looking to
address the aforementioned challenges of QoE centric management.
2 Keynote Address
After the opening of the workshop by the organizers, a keynote was presented by
Werner Van Leekwijck who is a senior research manager in Bell Labs in Antwerp,
Belgium, and is responsible for future multimedia delivery architectures. He started
his keynote by stressing the importance of QoE in designing the next generation
multimedia delivery solutions. For users, the QoE is ultimately all that matters;
it is thus not something that can easily be ignored.
The title of Werner’s keynote, Towards Cognitive Multimedia Delivery: High
QoE Web Video Streaming, referred to two distinct parts in his presentation. On the
one hand, Werner discussed the state of the art in achieving the highest QoE in
streaming video over the web. He also discussed Alcatel-Lucent’s main break-
throughs in this area. On the other hand, he presented his team’s ideas towards a
new multimedia delivery platform, which is far more cognitive oriented.
In discussing the state of the art of web video streaming, he emphasized the
importance of adaptation and more specifically HTTP adaptive streaming (HAS).
This emerging technology is becoming the de facto standard in streaming Over-
The-Top video services and is gaining increased attention both in industry and
academia. This was also reflected in the QCMan 2013 submissions: four of the
11 full papers focused on HAS. In HAS, video is encoded in
multiple quality levels and temporally segmented in chunks of 2–10 s. A quality
selection algorithm, positioned at the client, is responsible for dynamically selecting
the most suitable quality level based on the network and device performance.
HAS thus makes it possible to adapt the video bitrate dynamically during playback.
Werner argued that the HAS deployment should not be limited to the best effort
Internet, but that HAS is applicable to managed IPTV settings as well. Typical use
cases in this context are streaming of the live broadcast TV signal and the IPTV
Video on Demand catalogue to handheld devices such as tablets.
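The client-side quality selection logic described above can be sketched in a few lines; the bitrate ladder, safety margin and function name below are illustrative assumptions, not details of any particular HAS client:

```python
# Hypothetical sketch of HAS quality selection: the video is offered at
# several bitrates, and the client picks the highest level whose bitrate
# fits the measured throughput. All values here are invented examples.

QUALITY_LEVELS_KBPS = [500, 1500, 3000, 6000]  # example bitrate ladder

def select_quality(measured_throughput_kbps, safety_margin=0.8):
    """Pick the highest quality level that fits within a safety margin
    of the measured throughput; fall back to the lowest level."""
    budget = measured_throughput_kbps * safety_margin
    best = QUALITY_LEVELS_KBPS[0]
    for bitrate in QUALITY_LEVELS_KBPS:
        if bitrate <= budget:
            best = bitrate
    return best

print(select_quality(4000))  # -> 3000
print(select_quality(400))   # -> 500
```

Running this selection once per downloaded segment is what lets a HAS client adapt the bitrate during playback.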
In order to make HAS suitable for managed web video streaming as well, several
optimizations to the original HAS protocol are currently being proposed. Werner
presented a few of these optimizations in his keynote. First, the multimedia delivery
system should have a good notion of the QoE. To address this, he discussed the
details of a HAS session reconstruction technique and corresponding HAS metric,
which makes it possible to estimate the QoE of HAS sessions at intermediary points in the network.
Second, there are several possibilities for improvement of current HAS deploy-
ments: caching on segmented video, layered video coding and network-aware
solutions. Third, the network itself can also be the subject of improvement. The
network today was not designed with multimedia in mind. To tackle this, Werner
presented a deadline aware transport protocol called Shared Content Addressing
Protocol (SCAP), which was specifically designed to cope with the timing
constraints of video delivery.
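The deadline-aware transport idea behind SCAP can be illustrated with a generic earliest-deadline-first scheduling sketch; this is a simplification for illustration, not the actual SCAP protocol:

```python
# Generic earliest-deadline-first (EDF) sketch of deadline-aware video
# transport: segments whose playout deadline is closest are sent first.
# Segment names and deadlines are invented for illustration.

def edf_schedule(segments):
    """segments: list of (name, playout_deadline_seconds).
    Return names in transmission order, earliest deadline first."""
    return [name for name, _ in sorted(segments, key=lambda s: s[1])]

order = edf_schedule([("seg3", 12.0), ("seg1", 4.0), ("seg2", 8.0)])
print(order)  # -> ['seg1', 'seg2', 'seg3']
```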
In the second part of his keynote, Werner went beyond adaptation and introduced
the concept of a cognitive multimedia network. He argued that this is one of the
main future challenges in QoE centric management. He first elaborated on the
definition of cognition: while adaptation focuses on implementing a perception and
action cycle, cognition goes a step further and makes it possible to learn from past experience
as well. This means that, in contrast to an adaptive multimedia network, a cognitive
multimedia network should not only focus on reducing operational expenditures
(OPEX) but also on improving the performance over time. Similar to how software-
defined radios have evolved to cognitive radios, he argued that adaptive multimedia
networks will evolve towards cognitive multimedia networks. However, in order to
do this, there are still many important research challenges to be tackled in terms of
stability and robustness, which should be addressed in future research projects.
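The distinction between adaptation and cognition drawn in the keynote can be sketched as a toy learning loop: an adaptive controller reacts only to the current observation, whereas a cognitive one also keeps a memory of past outcomes. Everything below (action names, rewards, exploration budget) is an invented illustration, not a real network controller:

```python
# Toy "cognitive" selector: explore each action a few times, then
# exploit the action with the best observed average reward. The memory
# of past rewards is what distinguishes cognition from pure adaptation.

class CognitiveSelector:
    def __init__(self, actions, explore_rounds=5):
        self.actions = list(actions)
        self.totals = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}
        self.explore_rounds = explore_rounds

    def choose(self):
        for a in self.actions:            # still exploring this action?
            if self.counts[a] < self.explore_rounds:
                return a
        return max(self.actions,          # exploit learned averages
                   key=lambda a: self.totals[a] / self.counts[a])

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

sel = CognitiveSelector(["conservative", "aggressive"])
for _ in range(50):
    a = sel.choose()
    # Hypothetical reward signal: the aggressive setting performs better.
    sel.learn(a, 1.0 if a == "aggressive" else 0.3)
```

After the exploration phase, the selector settles on the better-performing action, i.e. its behaviour improves over time rather than merely reacting.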
3 Technical Paper Sessions
The 11 QCMan 2013 full papers were clustered in four thematic sessions, namely
(1) adaptive QoE management, (2) QoE assessment, (3) HTTP adaptive streaming
and (4) subjective QoE studies. In between the third and fourth full paper session, a
short paper session was held, discussing work in progress and new ideas, in order to
spark discussion. In the following subsections, we provide a brief overview of
the contributions of the papers in each of the sessions.
3.1 Session on Adaptive QoE Management
Mohamed Adel (Waterford Institute of Technology, Ireland) kicked off the QCMan
technical sessions by presenting his paper ‘‘A Generic Algorithm for Mid-call Audio
Codec Switching’’. In this paper, the authors focused on QoE optimization in a
Voice over IP (VoIP) environment. Different VoIP codecs differ in performance
and in their tolerance to packet loss. The authors compared the
performance of different codecs and identified different theoretical switching points.
However, switching at these points is not always possible in practice because of
overhead in terms of switch over gaps (i.e., the response time when switching
between codecs). They present and evaluate an adaptive algorithm that performs in-
call selection of the most appropriate audio codec given prevailing conditions on the
network path between the endpoints of a voice call. Existing QoE audio metrics are
used to evaluate the solution. The results show that the adaptive algorithm is able to
match the highest possible MOS score, as it selects the best performing audio codec.
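The core of such in-call codec selection can be sketched as a lookup of predicted MOS per codec at the current loss rate; the codec names, loss levels and MOS values below are invented for illustration, not the paper's measurements:

```python
# Illustrative in-call codec selection: given hypothetical per-codec MOS
# predictions at a few packet-loss levels, pick the codec with the
# highest predicted MOS at the nearest measured loss level.

MOS_TABLE = {
    # packet loss %: {codec: predicted MOS} -- all values invented
    0:  {"G.711": 4.4, "G.729": 4.0, "iLBC": 3.9},
    5:  {"G.711": 3.0, "G.729": 3.2, "iLBC": 3.6},
    10: {"G.711": 2.0, "G.729": 2.4, "iLBC": 3.1},
}

def best_codec(loss_pct):
    """Select the codec with the highest predicted MOS for the nearest
    tabulated loss level (a stand-in for the paper's switching points)."""
    nearest = min(MOS_TABLE, key=lambda level: abs(level - loss_pct))
    scores = MOS_TABLE[nearest]
    return max(scores, key=scores.get)

print(best_codec(1))   # -> G.711
print(best_codec(9))   # -> iLBC
```

The switching points correspond to the loss rates at which the best-scoring codec changes; a real algorithm must additionally account for the switch-over gap mentioned above.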
In the second paper in this session Bert Vankeirsbilck (Ghent University
- iMinds, Belgium) presented his paper titled ‘‘Quality of experience driven control
of interactive media stream parameters’’. This paper investigates cloud gaming,
which provides the entire game experience to the users remotely from a server.
Because the game data are streamed from the server to the user, these data are prone
to packet loss and therefore QoE degradations. The authors propose a control
algorithm, which adjusts video coding parameters such as frame rate and QP, based
on two models: a QoE model and a compression model. The QoE model is defined
using subjective testing and captures how a user will react to changes in the encoding
parameters. The compression model defines the compression gain that can be
achieved by modifying the parameters. During deployment, both models are joined
to find the optimal video quality parameters. The presented algorithm can
dynamically change the parameters to optimize the QoE by trading off visual
quality against frame rate as a function of the available bandwidth.
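Joining a QoE model and a compression model to pick encoding parameters, as described above, can be sketched as a constrained search; both model functions below are invented placeholders, not the models fitted in the paper:

```python
# Toy two-model control loop: a (hypothetical) QoE model scores
# (frame rate, QP) settings, a (hypothetical) compression model predicts
# their bitrate, and the controller picks the best-scoring setting that
# fits the available bandwidth.

def qoe_model(fps, qp):
    # Invented: QoE grows with frame rate, shrinks with quantization.
    return fps / 60.0 * (51 - qp) / 51.0

def compression_model(fps, qp):
    # Invented bitrate model (kbps): proportional to fps, roughly halving
    # for every 6 QP steps (a common H.264 rule of thumb).
    return 100.0 * fps * 2 ** (-(qp - 20) / 6.0)

def choose_settings(bandwidth_kbps):
    candidates = [(fps, qp) for fps in (15, 30, 60) for qp in range(20, 45)]
    feasible = [c for c in candidates
                if compression_model(*c) <= bandwidth_kbps]
    return max(feasible, key=lambda c: qoe_model(*c)) if feasible else None

fps, qp = choose_settings(3000)
```

As bandwidth drops, the feasible set shrinks and the controller trades off frame rate against quantization, mirroring the trade-off the paper's algorithm makes.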
3.2 Session on QoE Assessment
The second session investigated how the subjective notion of QoE can be assessed
and formulated in objective quality metrics. The first paper in this session focused
on QoE assessment of HAS. Danny De Vriendt (Alcatel-Lucent Bell Labs,
Belgium) presented his paper ‘‘Model for estimating QoE of Video delivered using
HTTP Adaptive Streaming’’, which is in essence an objective video quality metric for
HAS. Fundamental to their approach is that they wanted to derive this
metric from Quality of Service (QoS) parameters only, thus avoiding
Deep Packet Inspection. The metric is a weighted combination of
HAS-based QoS parameters such as chosen quality level, number of switches, etc.
The authors discussed how they performed a set of lab tests first to identify relevant
profiles of parameter configurations, which were then included in a larger scale
subjective test. Based on the output of this subjective test, they were able to find
well performing weights for their quality metric, which correlated with the results of
the subjective test, with a Root Mean Square Error (RMSE) of 0.3.
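A weighted-combination metric of the kind described above can be sketched as follows; the feature set, weights and outputs are invented for illustration and are not the values fitted in the paper:

```python
# Illustrative QoS-to-QoE mapping for HAS: combine session statistics
# observable without Deep Packet Inspection (average quality level,
# quality switches, stalling) into an estimated MOS. Weights invented.

def estimate_mos(avg_level, num_switches, stall_seconds,
                 w_level=3.0, w_switch=0.05, w_stall=0.15):
    """Estimate MOS on a 1-5 scale. avg_level is normalized to [0, 1],
    where 1 means the highest quality level was used throughout."""
    mos = (1.0 + w_level * avg_level
           - w_switch * num_switches - w_stall * stall_seconds)
    return max(1.0, min(5.0, mos))  # clamp to the MOS scale

print(estimate_mos(1.0, 0, 0))   # perfect session -> 4.0
print(estimate_mos(0.5, 10, 4))  # degraded session, lower MOS
```

In the paper's approach, the weights are fitted so that the metric's output correlates with subjective test scores.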
Glenn Van Wallendael (Ghent University - iMinds, Belgium) presented the
second paper in this session entitled ‘‘Evaluation of Full-Reference Objective Video
Quality Metrics on High Efficiency Video Coding’’. In this paper, the authors
investigated what will be the impact on quality as video coding technology is
shifting from the traditional H.264 coding standard to the new High Efficiency
Video Coding (HEVC) standard, which was completed in January 2013. They
evaluated the performance of several existing video quality metrics by assessing
how they relate to the actual quality. This actual quality was measured using a single
stimulus subjective test. The authors used both H.264 and HEVC videos and
investigated whether there are notable differences between them. The general observation
of their work is that the more advanced and fine-tuned a metric becomes, the more
caution is advised when applying it to new coding standards.
Furthermore, while the absolute performance of the different metrics may vary when moving
from H.264 to HEVC, the relative ordering between metrics remains the same.
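Evaluating an objective metric against subjective scores, as in this study, usually amounts to computing the correlation between metric outputs and MOS. A minimal sketch with invented data:

```python
# Pearson correlation between objective metric scores and subjective
# MOS values. Both data series below are invented for illustration.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

metric_scores = [30.1, 35.4, 38.2, 41.0]  # e.g. PSNR values, invented
subjective_mos = [2.1, 3.0, 3.6, 4.2]     # invented MOS values
r = pearson(metric_scores, subjective_mos)
```

A metric whose ranking of sequences matches the subjective ranking yields a correlation close to 1; comparing such correlations across H.264 and HEVC content is what reveals whether a metric transfers to the new standard.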
In the third paper, entitled ‘‘Neurophysiological Experimental Facility for
Quality of Experience (QoE) Assessment’’, Khalil ur Rehman Laghari (INRS,
University of Quebec, Canada) presented an interesting survey of available
neurophysiological facilities, which can be used for QoE assessment. The authors discuss
several neurophysiological methods such as Electroencephalography (EEG), which
measures electrical activity along the human scalp, and Near-infrared spectroscopy
(NIRS), which measures blood oxygenation in the brain. These neurophysiological facilities were
compared to other QoE assessment facilities such as systems that track the
peripheral autonomic nervous system (e.g., skin conductance, heart rate variation)
and eye tracking. The authors argue that the advantage of EEG and NIRS is their
faster reaction speed compared to other assessment techniques.
To conclude this session, Thomas Zinner (University of Wuerzburg, Germany)
presented his paper ‘‘Video Quality Monitoring based on Precomputed Frame
Distortions’’. In this paper, the authors tried to find a way to enable video quality
monitoring in the network, without relying on costly Deep Packet Inspection
techniques. To do this, they pre-computed the video quality and how it would
evolve for given network problems. These pre-computed values can then in turn be
used to make an estimation of the QoE if a similar network problem occurs. Their
approach was evaluated in a scenario consisting of a content provider, service
provider, intermediary network and set of users. In their approach, they precompute
the Structural SIMilarity (SSIM) index of each frame and its impact on the overall distortion.
Next, they monitor the number of lost frames per Group of Pictures (GOP). If one or
more frames are lost, the pre-computed values are mapped to a single video quality
value.
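The precomputation idea can be sketched as a simple table lookup; the SSIM values below are invented placeholders for the offline-computed distortions:

```python
# Sketch of precomputed-distortion monitoring: SSIM degradation per
# number of lost frames in a GOP is computed offline; at monitoring
# time only the loss count is observed and mapped to a quality
# estimate. The table values are invented for illustration.

PRECOMPUTED_SSIM = [1.00, 0.92, 0.85, 0.74, 0.60]  # index = lost frames

def estimate_gop_quality(lost_frames):
    """Clamp the loss count to the precomputed table and look up SSIM."""
    k = min(lost_frames, len(PRECOMPUTED_SSIM) - 1)
    return PRECOMPUTED_SSIM[k]

print(estimate_gop_quality(0))  # no loss -> 1.0
print(estimate_gop_quality(9))  # heavy loss, clamped to the last entry
```

The appeal of this design is that the expensive full-reference computation happens once offline, while the in-network monitor only counts lost frames.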
3.3 Session on HTTP Adaptive Streaming
The third technical session focused specifically on the upcoming HAS technology.
Jan Lievens (Free University Brussels, Belgium) presented his paper ‘‘Optimized
Segmentation of H.264/AVC Video for HTTP Adaptive Streaming’’, where he
investigates the coding cost of segmentation in HAS-based H.264/AVC coded
videos. Traditionally, all segments are equally sized and equally coded. While this
might be beneficial for the network as this introduces some level of predictable
behaviour, it is sub-optimal from a coding perspective. The authors show that the
traditional HAS segmentation strategy can introduce up to 10 % coding overhead. The
paper therefore questions the need for equal sized segments and presents an
optimized segmentation framework.
Thomas Zinner (University of Wuerzburg, Germany) also presented a paper on
HAS, entitled ‘‘Implementation and User-centric Comparison of a Novel Adaptation
Logic for DASH with SVC’’. In this paper, the authors compare the performance of
different client algorithms and how they perform when using SVC-based videos.
Based on this study, a new SVC-based client selection algorithm is proposed which
has the advantage that it does not rely on bandwidth measurements and does not
assume a constant bit rate. Instead, the algorithm derives the ratio at which the layers
should be downloaded from the layer bitrates advertised in the HAS manifest file.
The proposed client selection algorithm was evaluated through a proof-of-concept
implementation using the new HAS-based Dynamic Adaptive Streaming over
HTTP (DASH) standard. The evaluations show that the client is able to match the
playback quality of the other client algorithms, while reducing the switching
frequency by a factor of 10.
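The ratio-based idea can be sketched as deriving each layer's download share directly from the manifest bitrates; the layer bitrates below are invented, and this is a simplification of the paper's algorithm rather than its actual logic:

```python
# Sketch of manifest-driven SVC layer scheduling: instead of measuring
# bandwidth, derive each layer's share of downloads from the bitrates
# advertised in the manifest. Bitrate values invented for illustration.

def download_ratios(layer_bitrates_kbps):
    """Return each layer's share of the total advertised bitrate,
    usable as a download-scheduling ratio (shares sum to 1.0)."""
    total = sum(layer_bitrates_kbps)
    return [b / total for b in layer_bitrates_kbps]

ratios = download_ratios([400, 600, 1000])  # base + 2 enhancement layers
print(ratios)  # -> [0.2, 0.3, 0.5]
```

Because the ratios come from static manifest data, the scheduler avoids the noisy throughput estimates that drive frequent quality switches in measurement-based clients.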
3.4 Short Paper Session
QCMan 2013 also featured three short papers, containing important new contributions,
work in progress and thought-provoking ideas. In this session, the presentations
were shortened to leave more time for discussion. The first paper in
this session was entitled ‘‘Improving performance of H.264/AVC transmissions over
vehicular networks’’ and was presented by Ismael Rozas-Ramallal (Universidade da
Coruna, Spain). In this paper, the performance of H.264 is investigated when
streaming video over IEEE 802.11p networks, which are designed for vehicular
communication. The authors present two strategies for optimising video streaming:
the substitution of convolutional codes used in IEEE 802.11p with Low-Density
Parity Check codes and adapting the transmission power by taking into account the
picture type of the video. Both strategies use information from the video content
itself to optimize the performance.
Jin Li (Ghent University - iMinds, Belgium) presented the paper ‘‘Sampling in
Transform Domain for Improved QoE of 3D Frame Compatible Video Coding’’. In
this paper, sampling in the transform domain is examined for the application of 3D
frame-compatible video formats. Since the high frequency coefficients tend to be
removed in the encoding process, the proposed subsampling is performed in the
transform domain using the same transform structure as H.264/AVC. Therefore, the
information removed by the sub-sampling also has a very high probability to be
dropped by the quantization process. In this way, the information lost by sub-
sampling is minimized. In addition, the evaluation criterion is also discussed by
taking into account the impacts of both the sampling and the coding process.
The last short paper was presented by Juan Pedro Lopez Velasco (Universidad
Politecnica de Madrid, Spain), presenting the paper ‘‘No-Reference Algorithms for
Video Quality Assessment based on Artifact Evaluation in MPEG-2 and H.264
Encoding Standards’’. He discussed the need for no-reference (NR) video quality
metrics, where no information about the original video is given. NR metrics are
mainly used in on-line scenarios where sending the original video (or summarized
information) is infeasible because of the overhead. Instead, an NR algorithm must
estimate ‘‘what the human eye sees’’. The authors argue that artifact detection in
NR metrics is still tailored to older video coding
standards such as MPEG-2. According to the authors, considerable improvements
can be made if NR metrics also take into account the more recent coding advances
such as deblocking filters and variability of coefficients.
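A classic example of such a no-reference metric is a blockiness estimate, which compares pixel differences across coding-block boundaries with differences inside blocks. The toy version below works on a 2-D list of luma values and is purely illustrative, not one of the paper's algorithms:

```python
# Toy no-reference blockiness estimate: a decoded picture with visible
# 8x8 blocking shows larger luma jumps at block boundaries than inside
# blocks. Ratios well above 1.0 suggest blocking artifacts.

def blockiness(frame, block=8):
    """Ratio of mean horizontal luma jumps at block boundaries to mean
    jumps elsewhere, for a frame given as a 2-D list of luma values."""
    boundary, inside = [], []
    for row in frame:
        for x in range(1, len(row)):
            diff = abs(row[x] - row[x - 1])
            (boundary if x % block == 0 else inside).append(diff)
    mean_boundary = sum(boundary) / len(boundary)
    mean_inside = sum(inside) / len(inside)
    return mean_boundary / max(mean_inside, 1e-9)  # avoid division by zero

# Synthetic frame with a sharp jump exactly at the 8-pixel boundary:
frame = [[10] * 8 + [50] * 8 for _ in range(4)]
print(blockiness(frame) > 1.0)  # -> True
```

Crucially, no reference picture is needed, which is exactly the constraint NR metrics operate under; the paper's point is that such detectors must be updated for codecs with in-loop deblocking.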
3.5 Session on Subjective QoE Studies
As Quality of Experience is inherently subjective, a major research challenge is
understanding the subjective nature of user perception. The last QCMan 2013
session therefore focused on the setup of large-scale subjective QoE studies. In the
first paper, entitled ‘‘Design of a large-scale subjective test in the cinema’’, Katriina
Kilpi (Free University Brussels - iMinds, Belgium) reported on the design of a
subjective quality test on speckle perception of a new laser projector, which was
conducted in a movie theatre. Speckle is an annoying kind of glitter, which is
typically observed in cinemas and caused by mutual interference of a set of
wavefronts of different lasers (e.g., as applied in laser based display systems). The
main aim was to test the speckle perception and acceptance of attendees, while
consuming natural cinema content in an actual cinema setting. A subjective study of
187 participants was organized in an actual cinema setting. The conclusions of the
work were that speckle is often not noticeable: 88.6 % of the respondents did not
note any difference in picture quality.
José Luis Tornos (University of Zaragoza, Spain) presented the paper ‘‘An
eVoting platform for QoE evaluation’’. This paper discusses how electronic voting
systems can be used for collecting QoE information and evaluating trends in the
users opinions. Traditionally, information gathering and polling on the Internet is
usually done through surveys and forms. The authors argue that these approaches
have however several downsides such as the possibility of duplicate submissions
and security concerns in the anonymity of the submissions. To avoid these
downsides, the authors propose to use eVoting systems for QoE studies as well.
They present an implementation of a secure eVoting system. Its immediate
application is to replace existing voting systems or to carry out information
gathering for marketing polls in a secure and reliable way.
Pedro Casas (Telecommunications Research Center Vienna, Austria) presented
the last paper of the QCMan 2013 workshop. His paper, entitled ‘‘Quality of
Experience in Remote Virtual Desktop Services’’ investigates the impact on QoE of
the paradigm of mobile cloud computing. The authors discuss how a panel of 52
users experienced the quality of a remote virtual desktop service in a lab test. The
work is part of a larger project where the impact on QoE is evaluated for multiple
services (video conferencing, gaming, mobile cloud, etc.). The experiments
particularly characterize the influence of varying network conditions, as often
encountered in WAN environments. To evaluate the QoE, the authors considered
different interaction techniques such as typing, scrolling and drag & drop, which
correspond to the typical operations that users perform in a remote virtual desktop
service. The results show that, even under optimal network QoS conditions, the
additional application response delay introduced by a remote virtual desktop
service leads to a degradation which can be as high as 0.5 MOS.
4 Best Paper Award
During the closing of the workshop, a best paper award was given in recognition of
the authors' work. Before the workshop, a shortlist of award candidates was constructed,
based on the review scores by the expert reviewers. During the workshop, a small
committee then evaluated the presentations and the answering of the questions.
Based on these criteria, a best paper was then selected. The QCMan 2013 best paper
award went to the paper ‘‘Implementation and User-centric Comparison of a Novel
Adaptation Logic for DASH with SVC’’ presented by Thomas Zinner (University of
Wuerzburg, Germany). The paper is co-authored by Christian Sieber (University of
Wuerzburg, Germany), Tobias Hossfeld (University of Wuerzburg, Germany), Christian
Timmerer (Klagenfurt University, Austria) and Phuoc Tran-Gia (University of
Wuerzburg, Germany). The contributions of this paper were described in Sect. 3.3.
5 Concluding Remarks
As organizers, we consider this first edition of the QCMan workshop to be a success.
We were very happy to attract 22 high quality submissions. Moreover, with over
30 registrations, the workshop was very well attended. The inter-disciplinary nature
of the workshop was clearly much appreciated by many attendees as it triggered
many fruitful discussions between researchers from different areas. As a result,
QCMan 2013 was characterized by a high level of interactivity amongst the
participants. We would like to thank Werner Van Leekwijck for his thought
provoking keynote, the IM 2013 workshop chairs, for making sure that the
organization of the workshop went as smoothly as it did, as well as all the authors
for submitting and presenting their work. Finally, we would like to thank all QCMan
2013 participants for their attendance and very active participation in the discussions.
All QCMan 2013 full papers and short papers are also published in IEEE Xplore and
the IFIP Digital Library. Moreover, all program information of the workshop
(including slides of many presentations) is available online at http://www.qcman.
org. The next QCMan workshop is planned to be co-located
with IEEE/IFIP NOMS 2014 in Krakow, Poland. More information can be found on
the QCMan website: http://www.qcman.org.
References
1. Atzori, L., Chen, C.W., Dagiuklas, T., Wu, H.R.: QoE management in emerging multimedia services.
IEEE Commun. Mag. 50(4), 18–19 (2012)
Steven Latre is an assistant professor at the University of Antwerp, Belgium and the Future Internet
Department at iMinds. He received a Master of Science degree in computer science from Ghent
University, Belgium and a Ph.D. in Computer Science Engineering from the same university. His research
activity focuses on autonomous management and control of both networking and computing applications.
His recent work has focused on Quality of Experience optimization and management, distributed control
and network virtualization.
Antonio Liotta holds the Chair of Communication Network Protocols at the Eindhoven University of
Technology (The Netherlands), where he has led the Autonomic Networks team since 2008. Antonio is a
Fellow of the UK Higher Education Academy and serves on the Peer Review College of the UK Engineering
and Physical Sciences Research Council. During the last decade, he has investigated topical issues in the
area of computer and multimedia networking and is currently studying cognitive systems in the context of
optical, wireless and sensor networks. He is the author of the book Networks for Pervasive Services: Six
Ways to Upgrade the Internet.
Filip De Turck is a full-time professor affiliated with the Department of Information Technology of the
Ghent University and the Future Internet Department of iMinds, Belgium. Filip De Turck is author or co-
author of approximately 330 papers published in international journals or in the proceedings of
international conferences. His main research interests include scalable software architectures for
telecommunication network and service management, performance optimization and design of new
telecommunication services.