Performance of Distributed Constraint Optimization Algorithms
A. Gershman, T. Grinshpon, A. Meisels and R. Zivan
Dept. of Computer Science
Ben-Gurion University
DCR workshop - May 2008
Optimization Problems
Problems are too tight: no solution that satisfies all constraints exists
Search for the solution with a minimal cost
Constraint Optimization Problems
Weighted binary CSPs: every pair of assignments [<Xi,vi>, <Xj,vj>] is assigned a cost c
The cost of a tuple is the sum of the costs of all pairs included in it
Specific case – Max-CSP: all costs c are either 0 or 1 [Larrosa & Meseguer 96], [Larrosa & Schiex 2004]
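The cost model above can be sketched in a few lines (a minimal illustration; the function and dictionary names are mine, not from the talk):

```python
# Sketch of the weighted binary CSP cost model from the slide.
# cost[(i, vi, j, vj)] holds the cost c of the pair [<Xi,vi>, <Xj,vj>];
# the cost of a full tuple is the sum over all pairs included in it.
from itertools import combinations

def tuple_cost(assignment, cost):
    """assignment: dict var -> value; cost: dict keyed by (i, vi, j, vj)."""
    total = 0
    for (i, vi), (j, vj) in combinations(sorted(assignment.items()), 2):
        total += cost.get((i, vi, j, vj), 0)  # unconstrained pairs cost 0
    return total

# Max-CSP is the special case where every cost is 0 or 1, so minimizing
# total cost means minimizing the number of violated constraints.
cost = {(0, 'a', 1, 'a'): 1, (1, 'a', 2, 'b'): 1}   # two "forbidden" pairs
print(tuple_cost({0: 'a', 1: 'a', 2: 'b'}, cost))   # -> 2
```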
Distributed Constraint Optimization Problems (DisCOPs)
There are several approaches to solving DisCOPs:
Branch and Bound – SynchBB, AFB
Using a pseudo-tree – ADOPT, DPOP
Merging partial solutions – OptAPO
Very different algorithms – different behavior
A comparative evaluation of runtime performance is needed
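The Branch and Bound family listed above shares one core idea, which a minimal centralized sketch makes concrete (illustration only; this is not SynchBB or AFB themselves, whose distributed message protocols are omitted):

```python
# Minimal branch-and-bound over a weighted CSP: prune a partial
# assignment as soon as its accumulated cost reaches the cost of
# the best full solution found so far.
def branch_and_bound(n, d, pair_cost):
    """pair_cost(i, vi, j, vj) -> cost of that pair; returns minimal total cost."""
    best = float('inf')

    def extend(assignment, cost_so_far):
        nonlocal best
        i = len(assignment)
        if i == n:                          # full assignment reached
            best = min(best, cost_so_far)
            return
        for v in range(d):
            c = cost_so_far + sum(pair_cost(j, assignment[j], i, v)
                                  for j in range(i))
            if c < best:                    # bound: prune hopeless branches
                extend(assignment + [v], c)

    extend([], 0)
    return best

# Two variables, two values; only the value pair (0, 0) costs 1.
print(branch_and_bound(2, 2,
      lambda i, vi, j, vj: 1 if (vi, vj) == (0, 0) else 0))  # -> 0
```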
AFB
ADOPT
DPOP
OptAPO
A bit of history… DisCSPs
The runtime performance of ABT [Yokoo et al. 1995] was first compared to AWC [Yokoo et al. 1998]:
Performance was measured in cycles of a synchronous simulator
Runtime Performance of DisCSPs (II)
In 2001 we experimented with a synchronous algorithm that uses ordering heuristics and found it faster than ABT – but how should this be measured?
In 2002 a non-concurrent runtime measure for DisCSP search was introduced: non-concurrent constraint checks (NCCCs)
Synchronous CBJ (with variable ordering) was shown to be faster than ABT in NCCCs [Brito & Meseguer 2004]
All on randomly generated DisCSPs
Runtime Performance of Centralized COPs
Certain search algorithms produce a phase transition for increasingly tighter (and harder) random problems
Performance of standard Branch and Bound grows exponentially for ever harder problems.
Max-CSPs are harder than weighted CSPs
Evaluation of Runtime Performance – ADOPT*
*[Modi et al. 2005]
Evaluation of Runtime Performance – OptAPO*
*[Mailler & Lesser 2004]
What is a cycle?
All quoted results are in cycles
What is a cycle for each of the algorithms?
ADOPT, AFB – reading all messages and checking all local assignments against the current context or CPA
DPOP – calculating all costs of the sub-tree for every combination of assignments of higher-priority constrained agents
OptAPO – solving centrally a problem of the size of the mediation session
How to count NCCCs for DisCOPs?
ADOPT, SBB and AFB perform CCs in each computation step, which can be counted non-concurrently as for DisCSPs
DPOP – for every row in the table sent by a DPOP agent, the number of CCs is the number of potential assignments times the number of constrained (up-tree) agents
OptAPO – each mediation session is assigned the number of CCs needed to find the local solution
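The non-concurrent counting rule can be sketched as a logical-clock scheme (a simplified illustration; the class and method names are mine): each agent counts its own checks, a message carries the sender's counter, and the receiver advances to the maximum of the two, so the largest final counter approximates critical-path work.

```python
# Sketch of NCCC bookkeeping for distributed search.
class Agent:
    def __init__(self):
        self.nccc = 0

    def constraint_check(self):
        self.nccc += 1            # one local constraint check

    def send(self):
        return self.nccc          # message is stamped with sender's counter

    def receive(self, stamp):
        self.nccc = max(self.nccc, stamp)  # work after receipt is sequential

a, b = Agent(), Agent()
for _ in range(5):
    a.constraint_check()          # agent a performs 5 checks
b.receive(a.send())               # b's clock jumps to 5
for _ in range(3):
    b.constraint_check()          # 3 more checks, after a's message
print(max(a.nccc, b.nccc))        # -> 8 non-concurrent constraint checks
```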
Choosing the right benchmark for DisCOPs
Graph coloring problems do not cover important ranges of problem difficulty
Specific problems have special structures (MSP – equality binary constraints, sensor nets – very small density…)
Evaluation – use random DisMaxCSPs and increase the problem's difficulty (tightness)
One way to exhibit a “phase transition”
Experimental Set-up
Randomly generated Max-CSPs
Size: 10 variables, 10 values
Density: p1 = 0.4, 0.7
Tightness: p2 = 0.4 – 0.99
Runtime measure: non-concurrent constraint checks (NCCCs)
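A generator for problems of this kind might look as follows (a hypothetical sketch consistent with the set-up above, not the authors' actual generator): density p1 is the probability that a pair of variables is constrained, and tightness p2 is the probability that a value pair of a constrained pair costs 1.

```python
# Random Max-CSP generator: n variables, d values, density p1, tightness p2.
import random
from itertools import combinations, product

def random_max_csp(n=10, d=10, p1=0.4, p2=0.4, seed=0):
    rng = random.Random(seed)
    cost = {}
    for i, j in combinations(range(n), 2):
        if rng.random() < p1:                 # variables Xi, Xj are constrained
            for vi, vj in product(range(d), repeat=2):
                if rng.random() < p2:         # this value pair is forbidden
                    cost[(i, vi, j, vj)] = 1  # Max-CSP: every cost is 0 or 1
    return cost

cost = random_max_csp()
print(all(c == 1 for c in cost.values()))     # -> True (only 0/1 costs)
```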
Logarithmic scale (p1 = 0.4)
Linear scale (p1 = 0.4)
Low to intermediate difficulty
ADOPT performs well
AFB, OptAPO, and SynchBB are fairly close
DPOP performs extremely poorly, probably due to the lack of pruning of the search space
As difficulty grows…
The runtime of ADOPT, OptAPO, and SynchBB grows at a high exponential rate
ADOPT did not terminate its run on the tightest problems (p2 ≥ 0.9)
DPOP and AFB perform far better, by two orders of magnitude
Linear scale – A Closer Look
High Constraints Density (p1 = 0.7)
Linear scale (p1 = 0.7)
High Density and Low Tightness
• The performance of ADOPT, AFB, OptAPO, and SynchBB is fairly similar
• The performance of DPOP is much worse
High Density and Tightness
The performance of ADOPT, OptAPO, and SynchBB deteriorates exponentially
The algorithms did not terminate their run on the tightest problems (p2 > 0.9)
ADOPT is the worst, since it failed to terminate its run at a lower tightness value (0.7)
DPOP's runtime is high, but it always terminated and is independent of tightness
AFB is clearly the best-performing algorithm
Outperforms DPOP by orders of magnitude
Analysis of ADOPT
• ADOPT is unable to solve hard problems
• High tightness generates an excess of (greedy) context switches, along with an exponential increase in the number of messages sent
• Each agent in ADOPT sends out two messages following every single message it receives
• This causes the runtime of ADOPT to increase at a very high exponential rate
Analysis of DPOP
• DPOP does not perform search or pruning
• Computes the same size of matrices regardless of the tightness
• A change in the problem’s tightness would only affect the content of the matrices
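This observation can be made concrete: a DPOP agent's outgoing table enumerates every value combination of its separator (the constrained higher-priority agents), so its size is d to the power of the separator size, regardless of tightness. A tiny illustration (numbers chosen to match the experimental set-up, d = 10 values):

```python
# A DPOP agent's table size depends only on domain size and separator
# size - tightness changes the table's contents, never its dimensions.
def dpop_table_size(domain_size, separator_size):
    return domain_size ** separator_size

# A separator of 3 agents with 10 values each already means 1000 rows,
# and the table is exactly this large for any tightness p2.
print(dpop_table_size(10, 3))   # -> 1000
```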
Analysis of AFB
The performance of AFB exhibits a "phase transition": AFB's runtime increases as the problem's difficulty (tightness) increases, and then suddenly drops by an order of magnitude at very high values of p2
This is very similar to COPs – deeper lookahead leads to much improved performance [Gershman et al. 2006] [Larrosa & Schiex 2004]
Conclusions
ADOPT appears to be affected the most by the increase in the problem's tightness
OptAPO performs up to three times better than SynchBB
DPOP's performance is independent of the problem's tightness
AFB performs well over the whole range of problem difficulty; it is unique in its phase transition, probably due to its use of pruning techniques through asynchronous forward-bounding