
Brett D. Higgins^, Kyungmin Lee*, Jason Flinn*, T.J. Giuli+, Brian Noble*, and Christopher Peplin+

^Arbor Networks, *University of Michigan, +Ford Motor Company

The future is cloudy: Reflecting prediction error in mobile applications

Mobile applications are adaptive


How do applications adapt?


Make predictions

Choose optimal strategy

Execute it!


CloneCloud ’11, MAUI ’10, Chroma ’07, Spectra ’02


What can possibly go wrong?

Predictions are not perfect


Need to consider predictor errors!

Need to consider redundancy


Re-evaluate the environment


Needs to constantly re-evaluate the environment

Embracing uncertainty

• Our library chooses the best strategy
  – Incorporates prediction errors
  – Single strategy or redundant
  – Balances cost & benefit of redundancy
    • Benefit (time saved)
    • Cost (energy + cellular data)
  – Re-evaluates the environment


Outline

• Motivation
• Uncertainty-aware decision-making methods
  – Library overview
  – Our three methods
  – Re-evaluation from new information
• Evaluation
• Conclusion


Library overview


[Diagram: the application provides its strategies and predictors; our library provides the error distributions, environment re-evaluation, and the decision mechanism.]

Remote vs. Local

Uncertain server load
[Chart: remote response time is 1 sec with 90% probability, 100 sec with 10%; local response time is 20 sec with 100%.]

Local expected time: 20 sec
Remote expected time: 10.9 sec


Let’s consider redundancy

[Chart: the same distributions (remote: 1 sec at 90%, 100 sec at 10%; local: 20 sec at 100%), with both strategies run in parallel.]

Redundancy expected time: 2.9 sec
Remote expected time: 10.9 sec
Local expected time: 20 sec
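The expected-time arithmetic above can be checked with a short sketch (illustrative code only, not the library's API). Redundancy finishes when the faster of the two strategies finishes, so it takes the minimum over the joint distribution:

```python
# Discrete time distributions as [(probability, time_in_seconds), ...]
remote_dist = [(0.9, 1.0), (0.1, 100.0)]  # uncertain server load
local_dist = [(1.0, 20.0)]                # local always takes 20 s

def expected_time(dist):
    """Expected value of a discrete time distribution."""
    return sum(p * t for p, t in dist)

def redundant_expected_time(dist_a, dist_b):
    """Run both strategies in parallel: completion time is the min of
    the two, weighted over the joint (independent) distribution."""
    return sum(pa * pb * min(ta, tb)
               for pa, ta in dist_a
               for pb, tb in dist_b)

expected_time(local_dist)                         # 20 s
expected_time(remote_dist)                        # 10.9 s
redundant_expected_time(remote_dist, local_dist)  # 2.9 s
```

The 2.9 s comes from 0.9 * min(1, 20) + 0.1 * min(100, 20) = 0.9 + 2.0.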

Incorporating prediction errors


• Use redundancy?
  – When predictions are too uncertain
  – Benefit (time) > Cost (energy + cellular data)
• Our library provides three methods
  – Brute force, error bounds, Bayesian estimation
  – Hides complexity from the application

Brute force


• Compute error upon new measurement
• Weighted sum over joint error distribution
  – For redundant strategies:
    • Time: min across all strategies
    • Cost: sum across all strategies
• Simple, but computationally expensive
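A minimal sketch of the brute-force method as described (the data layout and function names are hypothetical; the real library's interfaces differ). For each candidate strategy set it sums over the joint error distribution, taking the min of the times and the sum of the costs for redundant sets:

```python
from itertools import product

def brute_force(strategy_sets, error_dists):
    """Pick the strategy set minimizing expected (time + cost).
    error_dists maps a strategy name to samples of its error
    distribution: [(probability, time, cost), ...]."""
    best, best_value = None, float("inf")
    for subset in strategy_sets:
        dists = [error_dists[s] for s in subset]
        value = 0.0
        # Weighted sum over the joint error distribution of the subset.
        for outcomes in product(*dists):
            prob = 1.0
            for p, _, _ in outcomes:
                prob *= p
            time = min(t for _, t, _ in outcomes)   # fastest member wins
            cost = sum(c for _, _, c in outcomes)   # every member pays
            value += prob * (time + cost)
        if value < best_value:
            best, best_value = subset, value
    return best, best_value

error_dists = {
    "local":  [(1.0, 20.0, 0.0)],
    "remote": [(0.9, 1.0, 0.5), (0.1, 100.0, 0.5)],
}
choice, value = brute_force(
    [("local",), ("remote",), ("local", "remote")], error_dists)
# Redundancy wins here despite its extra cost.
```

The joint enumeration is why this method is simple but computationally expensive: its work grows with the product of the distribution sizes.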

Error bounds

• Obtain bound for new measurement
• Calculate bound on net gain of redundancy

  max(benefit) - min(cost) = max(net gain)

[Charts: predicted network bandwidth bounded between BP1 and BP2 (Mbps), giving bounds T1 and T2 (seconds) on the time to send 10 Mb; the spread between the bounds gives the max time savings from redundancy.]
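The bound check above can be sketched as a single inequality (illustrative helper, not library code): if even the most optimistic estimate of redundancy's net gain is non-positive, redundancy can be ruled out cheaply.

```python
def max_net_gain(single_time_upper, redundant_time_lower,
                 redundant_cost_lower):
    """Upper bound on the net gain of redundancy:
    max(benefit) - min(cost) = max(net gain).
    Benefit is time saved versus the best single strategy; if this
    optimistic bound is <= 0, redundancy cannot pay off."""
    max_benefit = single_time_upper - redundant_time_lower
    return max_benefit - redundant_cost_lower

# Hypothetical numbers: sending 10 Mb alone takes at most 8 s, with
# redundancy at least 2 s, and redundancy costs at least 1 s-equivalent.
max_net_gain(8.0, 2.0, 1.0)  # 5.0 > 0, so redundancy stays in play
```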

Bayesian estimation

• Basic idea:
  – Given a prior belief about the world,
  – and some new evidence,
  – update our beliefs to account for the evidence,
    • AKA obtaining the posterior distribution
  – using the likelihood of the evidence
• Via Bayes’ Theorem:

  posterior = (likelihood * prior) / p(evidence)

  where p(evidence) is a normalization factor that ensures the posterior sums to 1.

Bayesian estimation

• Applied to decision making:
  – Prior: completion time measurements
  – Evidence: completion time prediction + implied decision
  – Posterior: new belief about completion time
  – Likelihood:
    • When local wins, how often has the prediction agreed?
    • When remote wins, how often has the prediction agreed?
• Via Bayes’ Theorem:

  posterior = (likelihood * prior) / p(evidence)

  where p(evidence) is a normalization factor that ensures the posterior sums to 1.
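A discrete Bayes update matching the formula above (the hypothesis names and likelihood numbers are made up for illustration; they play the role of "how often has the prediction agreed"):

```python
def bayesian_update(prior, likelihood):
    """Discrete Bayes rule: posterior ~ likelihood * prior.
    prior maps hypothesis -> probability; likelihood maps
    hypothesis -> p(evidence | hypothesis)."""
    unnorm = {h: likelihood[h] * p for h, p in prior.items()}
    z = sum(unnorm.values())  # p(evidence): normalization factor
    return {h: v / z for h, v in unnorm.items()}

# Prior from past measurements: the server is usually lightly loaded.
prior = {"fast": 0.9, "slow": 0.1}
# Evidence: the predictor says "remote wins".  Likelihoods record how
# often that prediction has agreed with reality in each state.
likelihood = {"fast": 0.8, "slow": 0.3}
posterior = bayesian_update(prior, likelihood)
# The evidence strengthens the belief that the server is fast.
```

Dividing by z is exactly the "normalization factor" from the slide: it makes the posterior sum to 1.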

Re-evaluation: conditional distributions

Uncertain server load
[Chart: remote response time 1 sec (90%) / 100 sec (10%); local 20 sec (100%). Expected times: local 20 sec, remote 10.9 sec. As elapsed time grows (0, 11 s, 31 s, …, 100 s), the remote distribution is conditioned on no response having arrived yet, and the decision shifts from remote only to remote & local.]
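The re-evaluation step above can be sketched as conditioning the response-time distribution on the time already elapsed (illustrative code): once the fast outcome is ruled out, the expected remaining time changes, and the best decision can change with it.

```python
def expected_remaining(dist, elapsed):
    """Condition a discrete response-time distribution on `elapsed`
    seconds having passed with no response, and return the expected
    remaining time."""
    remaining = [(p, t) for p, t in dist if t > elapsed]
    z = sum(p for p, _ in remaining)  # renormalize surviving outcomes
    if z == 0:
        return 0.0
    return sum(p * (t - elapsed) for p, t in remaining) / z

remote_dist = [(0.9, 1.0), (0.1, 100.0)]
expected_remaining(remote_dist, 0.0)   # 10.9 s: remote looks best
# After 11 s with no response, the 1 s outcome is impossible, so the
# expected remaining time jumps to 89 s: launching local (20 s) as a
# redundant strategy now wins.
expected_remaining(remote_dist, 11.0)
```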

Outline

• Motivation
• Uncertainty-aware decision-making methods
  – Library overview
  – Our three methods
  – Re-evaluation from new information
• Evaluation
• Conclusion


Evaluation: methodology

• Network trace replay (walking & driving)
  – Speech recognition, network selection app
• Metric: weighted cost function
  – time + c_energy * energy + c_data * data

                                    No-cost   Low-cost   Mid-cost   High-cost
c_energy                            0         0.00001    0.0001     0.001
Battery life reduction under
average use (normally 20 hours)     N/A       6 min      36 sec     3.6 sec

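The metric can be written out directly (a sketch; the c_data value and the energy/data units here are placeholders, since only the c_energy tiers appear in the table above):

```python
def weighted_cost(time_s, energy, data, c_energy, c_data):
    """Evaluation metric from the talk:
    time + c_energy * energy + c_data * data.
    The c_* knobs trade response time against battery and cellular data."""
    return time_s + c_energy * energy + c_data * data

# One hypothetical request: 2 s, 5000 units of energy, 1 unit of data.
no_cost = weighted_cost(2.0, 5000.0, 1.0, c_energy=0.0, c_data=0.0)
high = weighted_cost(2.0, 5000.0, 1.0, c_energy=0.001, c_data=0.0)
# Under the high-cost tier the energy term dominates the raw time,
# which is why redundancy pays off less as cost weights increase.
```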

Speech recognition, server load
[Bar chart: weighted cost (norm., 0 to 1.2) for Local-only, Remote-preferred, Adaptive, and our library, at no cost and high cost; annotated 23%.]

Our library matches the best strategy
Redundancy is less beneficial as cost increases

Network selection, walking trace
[Bar chart: weighted cost (norm., 0 to 1.8) for Cellular-only, Remote-preferred, Adaptive, and our library, at no cost and high cost; annotated 24% and 2x.]

Our library matches the best strategy


Discussion

• Our library provides the best strategy
• Which method is the best?
  – Brute force: accurate, but expensive
  – Error bounds: leans toward redundancy
  – Bayesian: mixed bag
• No clear winner


Conclusion

• Need to consider uncertainty in predictions
• Redundancy is powerful!
• Our library helps apps choose the best strategy
• Source code at
  – https://github.com/brettdh/instruments
  – https://github.com/brettdh/libcmm


Questions?


Speech recognition, server load
[Bar chart: weighted cost (norm., 0 to 1.2) for brute force, error bounds, and Bayesian, at no cost and high cost.]

Error bounds leans towards redundancy

Network selection, walking trace
[Bar chart: weighted cost (norm., 0 to 2) for the simple strategies and our library, across no-, low-, mid-, and high-cost settings; annotated 24% and 2x.]

Low-resource strategies improve
Meatballs matches the best strategy
Error bounds leans towards redundancy

Speech recognition, server load
[Bar chart: weighted cost (norm., 0 to 1.4) for the simple strategies and our library, across no-, low-, mid-, and high-cost settings; annotated 23%.]

Meatballs matches the best strategy
Error bounds leans towards redundancy

Network selection, driving trace
[Bar chart: weighted cost (norm., 0 to 2) for the simple strategies and our library, across no-, low-, mid-, and high-cost settings.]

Not much benefit from using WiFi

Speech recognition, walking trace
[Bar chart: weighted cost (norm., 0 to 1.4) for the simple strategies and our library, across no-, low-, mid-, and high-cost settings; annotated 23-35% and >2x.]

Benefit of redundancy persists more
Meatballs matches the best strategy