Advanced topics in spectral
graph theory
Edwin Hancock
Department of Computer Science
University of York
Supported by a Royal Society
Wolfson Research Merit Award
Links explored in this talk
• What more can we learn about the structure and
complexity of networks from their graph spectra?
• Structure: Relationships between graph spectra
and structure of networks. How random walks
can be used as probes of structure.
• Complexity: How can we use graph spectra to
compute an index of network complexity?
Machine learning perspective
• Learn about structure of networks.
• Entropy from node degree statistics.
• Relationships between graph spectra and tree or cycle
frequencies, and entropy and degree statistics.
• Kernels for measuring network similarity and graph
embedding.
• Description length principles for learning distributions
determining structure of graphs – “generative models”.
Structure
Graph spectra and
random walks
Use spectrum of Laplacian matrix
to compute hitting and commute
times for random walk on a graph
Laplacian Matrix
• Weighted adjacency matrix
• Degree matrix
• Laplacian matrix
$$W(u,v) = \begin{cases} w(u,v) & \text{if } (u,v)\in E \\ 0 & \text{otherwise} \end{cases}$$
$$D(u,u) = \sum_{v\in V} W(u,v)$$
$$L = D - W$$
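A minimal numpy sketch (not from the talk) of these three definitions on a small hypothetical weighted graph:

```python
import numpy as np

# Hypothetical weighted, undirected graph given as (u, v, weight) triples.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 1.0), (2, 3, 1.0)]
n = 4

W = np.zeros((n, n))                  # weighted adjacency matrix
for u, v, w in edges:
    W[u, v] = W[v, u] = w

D = np.diag(W.sum(axis=1))            # degree matrix D(u,u) = sum_v W(u,v)
L = D - W                             # Laplacian matrix

print(L)
```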
Laplacian spectrum
• Spectral Decomposition of Laplacian
• Element-wise
$$L = \Phi\Lambda\Phi^T, \qquad \Lambda = \mathrm{diag}(\lambda_1,\ldots,\lambda_{|V|}), \qquad \Phi = (\phi_1|\cdots|\phi_{|V|})$$
$$L(u,v) = \sum_k \lambda_k\,\phi_k(u)\,\phi_k(v)$$
$$0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_{|V|}$$
Properties of the Laplacian
• Eigenvalues are non-negative and the smallest eigenvalue is zero.
• The multiplicity of the zero eigenvalue is the number of connected components of the graph.
• The zero eigenvalue is associated with the all-ones vector.
• The eigenvector associated with the second smallest eigenvalue is the Fiedler vector.
$$0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_{|V|}$$
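A quick numpy check of the multiplicity property, on a hypothetical graph made of two disjoint triangles (so two connected components):

```python
import numpy as np

W = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]:
    W[u, v] = W[v, u] = 1.0
L = np.diag(W.sum(axis=1)) - W

evals = np.linalg.eigvalsh(L)             # real, sorted: L is symmetric PSD
print(evals)                              # [0, 0, 3, 3, 3, 3]
print("components:", np.sum(np.abs(evals) < 1e-10))   # multiplicity of 0 -> 2
```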
Continuous time random walk
Heat Kernels
• Solution of the heat equation; measures
information flow across the edges of the graph
with time:
• Solution found by exponentiating
Laplacian eigensystem
$$\frac{\partial h_t}{\partial t} = -L h_t$$
$$h_t = \exp[-Lt] = \Phi\exp[-\Lambda t]\Phi^T = \sum_k \exp[-\lambda_k t]\,\phi_k\phi_k^T$$
where $L = D - W = \Phi\Lambda\Phi^T$.
Heat kernel and random walk
• State vector of continuous time random
walk satisfies the differential equation
• Solution
$$\frac{\partial p_t}{\partial t} = -L p_t$$
$$p_t = \exp[-Lt]\,p_0 = h_t\,p_0$$
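A short numpy sketch (illustrative, not from the talk) that exponentiates the Laplacian eigensystem to build h_t and evolve the walk on a hypothetical 4-node path graph:

```python
import numpy as np

def heat_kernel(L, t):
    """h_t = exp(-Lt), computed from the Laplacian eigensystem."""
    lam, phi = np.linalg.eigh(L)
    return phi @ np.diag(np.exp(-lam * t)) @ phi.T

W = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    W[u, v] = W[v, u] = 1.0
L = np.diag(W.sum(axis=1)) - W

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # walk starts at node 0
for t in [0.1, 1.0, 10.0]:
    print(t, heat_kernel(L, t) @ p0)  # p_t = h_t p_0, tends to uniform
```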
Example.
Figure shows the spanning tree of the heat kernel; here the weights of the graph are
the elements of the heat kernel. As t increases, the spanning tree evolves from
a tree rooted near the centre of the graph to a string (with ligatures).
Low-t behaviour is dominated by the Laplacian, high-t behaviour by the
Fiedler vector.
Moments of the heat-kernel
trace
….can we characterise graph by
the shape of its heat-kernel trace
function?
Heat Kernel Trace
[Figure: heat-kernel trace vs. time t]
$$\mathrm{Tr}[h_t] = \sum_i \exp[-\lambda_i t]$$
Shape of heat-kernel
distinguishes
graphs…can we
characterise its shape
using moments
Use moments of the heat-kernel trace:
$$\int_0^\infty t^{s-1}\,\mathrm{Tr}[h_t]\,dt$$
Rosenberg Zeta function
• Definition of zeta function
$$\zeta(s) = \sum_{\lambda_k \neq 0} \lambda_k^{-s}$$
Heat-kernel moments
• Mellin transform
• Trace and number of connected components
• Zeta function
$$\lambda_i^{-s} = \frac{1}{\Gamma(s)}\int_0^\infty t^{s-1}\exp[-\lambda_i t]\,dt, \qquad \Gamma(s) = \int_0^\infty t^{s-1}\exp[-t]\,dt$$
$$\mathrm{Tr}[h_t] = C + \sum_{\lambda_i\neq 0}\exp[-\lambda_i t]$$
$$\zeta(s) = \frac{1}{\Gamma(s)}\int_0^\infty t^{s-1}\bigl(\mathrm{Tr}[h_t]-C\bigr)\,dt$$
C is the multiplicity of the zero
eigenvalue, i.e. the number of
connected components of the graph.
The zeta function is related
to the moments of the heat-kernel
trace.
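A minimal sketch of the resulting pattern vector, computing zeta(s) directly from the non-zero Laplacian eigenvalues of a hypothetical path graph:

```python
import numpy as np

def zeta(L, s, tol=1e-10):
    """Rosenberg zeta function: non-zero eigenvalues raised to the power -s."""
    lam = np.linalg.eigvalsh(L)
    lam = lam[lam > tol]              # the discarded count is C
    return float(np.sum(lam ** (-float(s))))

W = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    W[u, v] = W[v, u] = 1.0
L = np.diag(W.sum(axis=1)) - W
print([round(zeta(L, s), 4) for s in (1, 2, 3, 4)])   # pattern vector
```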
Zeta-function behavior
Objects
72 views of each object, taken at 5-degree intervals as the camera moves
in a circle around the object.
Feature points extracted using a corner detector.
Construct a Voronoi tessellation of the image plane using the corner points as
seeds.
The Delaunay graph is the region adjacency graph of the Voronoi regions.
Heat kernel moments
(zeta(s), s=1,2,3,4)
PCA using zeta(s), s=1,2,3,4
Zeta function derivative
• Zeta function in terms of natural exponential
• Derivative
• Derivative at origin
$$\zeta(s) = \sum_{\lambda_k\neq 0}\lambda_k^{-s} = \sum_{\lambda_k\neq 0}\exp[-s\ln\lambda_k]$$
$$\zeta'(s) = -\sum_{\lambda_k\neq 0}\ln\lambda_k\,\exp[-s\ln\lambda_k]$$
$$\zeta'(0) = -\sum_{\lambda_k\neq 0}\ln\lambda_k = \ln\prod_{\lambda_k\neq 0}\lambda_k^{-1}$$
Meaning
• Number of spanning trees in graph
$$\tau(G) = \frac{\prod_{u\in V} d_u}{\sum_{u\in V} d_u}\,\exp[-\zeta'(0)]$$
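A numpy sketch checking this formula (assuming it is stated for the normalised Laplacian spectrum) against the Kirchhoff matrix-tree theorem on K4, which has 16 spanning trees:

```python
import numpy as np

def spanning_trees_via_zeta(W):
    d = W.sum(axis=1)
    Dq = np.diag(1.0 / np.sqrt(d))
    Lhat = np.eye(len(d)) - Dq @ W @ Dq        # normalised Laplacian
    lam = np.linalg.eigvalsh(Lhat)
    zeta_prime_0 = -np.sum(np.log(lam[lam > 1e-10]))   # -sum ln(lambda_k)
    return np.prod(d) / d.sum() * np.exp(-zeta_prime_0)

def spanning_trees_kirchhoff(W):
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.det(L[1:, 1:])            # matrix-tree theorem

W = np.ones((4, 4)) - np.eye(4)                # complete graph K4
print(spanning_trees_via_zeta(W), spanning_trees_kirchhoff(W))   # 16, 16
```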
COIL
Deeper probes of structure
Ihara zeta function
Zeta functions
• Used in number theory to characterise
distribution of prime numbers.
• Can be extended to graphs by replacing
notion of prime number with that of a
prime cycle.
Ihara Zeta function
• Determined by distribution of prime cycles.
• Transform graph to oriented line graph (OLG), whose nodes are the directed edges of the original graph, with arcs indicating incidence at a common vertex.
• Zeta function is reciprocal of characteristic polynomial for OLG adjacency matrix.
• Coefficients of polynomial determined by eigenvalues of OLG adjacency matrix.
• Powers of OLG adjacency matrix give prime cycle frequencies.
Oriented Line Graph
Ihara Zeta Function
• Ihara Zeta Function for a graph G(V,E)
– Defined over prime cycles of graph
– Rational expression in terms of characteristic polynomial of oriented line-graph
A is the adjacency matrix of the line digraph.
Q = D − I (degree matrix minus identity).
Characteristic Polynomials from IZF
• Perron-Frobenius operator is the adjacency matrix TH
of the oriented line graph
• Determinant Expression of IZF
– Each coefficient, i.e. the Ihara coefficient, can be derived from the
elementary symmetric polynomials of the eigenvalue set
• Pattern vector in terms of the Ihara coefficients
Analysis of determinant
• From matrix logs
• Tr[T^k] is symmetric polynomial of
eigenvalues of T
$$\zeta(s)^{-1} = \det[I - sT], \qquad \det[I - sT]^{-1} = \exp\left[\sum_{k=1}^{\infty}\frac{s^k}{k}\,\mathrm{Tr}[T^k]\right]$$
$$\mathrm{Tr}[T] = \lambda_1 + \lambda_2 + \cdots + \lambda_N$$
$$\mathrm{Tr}[T^2] = \lambda_1^2 + \lambda_2^2 + \cdots + \lambda_N^2$$
$$\vdots$$
$$\mathrm{Tr}[T^k] = \sum_{i=1}^{N}\lambda_i^k$$
Distribution of prime cycles
• Frequency distribution for cycles of length l
• Cycle frequencies
$$\frac{d}{ds}\ln\zeta(s) = \sum_l N_l\,s^{l-1}$$
$$N_l = \frac{1}{(l-1)!}\left[\frac{d^l}{ds^l}\ln\zeta(s)\right]_{s=0} = \mathrm{Tr}[T^l]$$
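A small sketch (a brute-force construction, not the efficient method of the cited paper) that builds the oriented line graph of a hypothetical graph and reads off cycle frequencies from Tr[T^l]:

```python
import numpy as np

def oriented_line_graph(edges):
    """Nodes of the OLG are the directed arcs (u, v); arc (u, v) is joined
    to (v, w) whenever w != u, which excludes backtracking."""
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    T = np.zeros((len(arcs), len(arcs)))
    for i, (u, v) in enumerate(arcs):
        for j, (a, b) in enumerate(arcs):
            if a == v and b != u:
                T[i, j] = 1.0
    return T

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]   # two overlapping cycles
T = oriented_line_graph(edges)
for l in range(3, 7):
    print(l, int(np.trace(np.linalg.matrix_power(T, l))))   # N_l = Tr[T^l]
```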
Experiments: Edge-weighted Graphs
Feature Distance
& Edit Distance
Three Classes of Randomly
Generated Graphs
Experiments: Hypergraphs
Heat Kernel Signature
• The heat kernel signature has been used to
characterise 3D shapes
• Sun et al (2009)
• Sample self-heat of vertices over time
• Closely related to heat kernel trace
• Characterises individual vertices
• Overall signature from histogram of all HKS
$$\mathrm{HKS}(x) = \left[h_{t_0}(x,x),\,h_{t_1}(x,x),\,h_{t_2}(x,x),\ldots\right]$$
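A minimal sketch of an HKS-style signature on a plain graph Laplacian (Sun et al. work with the mesh Laplace-Beltrami operator; the sample times here are arbitrary):

```python
import numpy as np

def heat_kernel_signature(W, ts=(0.1, 1.0, 10.0)):
    """One row per vertex x: [h_t(x, x) for t in ts]."""
    L = np.diag(W.sum(axis=1)) - W
    lam, phi = np.linalg.eigh(L)
    # h_t(x, x) = sum_k exp(-lam_k t) phi_k(x)^2
    return np.array([(phi ** 2 * np.exp(-lam * t)).sum(axis=1)
                     for t in ts]).T

W = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    W[u, v] = W[v, u] = 1.0
print(heat_kernel_signature(W))       # one signature row per vertex
```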
Quantum Walks
Quantum Walks on Graphs
• Have both discrete time (DQW) and continuous time (CQW) variants.
• Use qubits to represent state of walk.
• State-vector is composed of complex amplitudes rather than probabilities. Governed by unitary matrices rather than stochastic matrices.
• Admit interference of different feasible walks.
• Reversible, non ergodic, no limiting distribution.
• Sensitive to symmetry structure of graph (leads to faster hitting and commute times).
Quantum and Classical Walks on Graphs
– Contrast continuous-time quantum walk and classical random
walk on a graph
• Continuous-time quantum walk
– A) state space: set of vertices
– B) the state vector is complex-valued
– C) the evolution is governed by a time-varying unitary matrix
• Classical random walk
– A) state space: set of vertices
– B) the state vector is real-valued
– C) the evolution is governed by a doubly stochastic matrix
– Continuous-time quantum walk is reversible and non-ergodic, and does not have a limiting distribution.
Continuous time walk
• Classical
• Quantum
$$\frac{d}{dt}p(t) = -L\,p(t)$$
$$\frac{d}{dt}|\psi_t\rangle = -iL\,|\psi_t\rangle$$
Continuous time walk
• Classical (heat equation)
• Quantum (complex wave equation)
$$\frac{d}{dt}p(t) = -L\,p(t), \qquad p(t) = \exp[-Lt]\,p(0)$$
$$\frac{d}{dt}|\psi_t\rangle = -iL\,|\psi_t\rangle, \qquad |\psi_t\rangle = \exp[-iLt]\,|\psi_0\rangle$$
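A numpy/scipy sketch contrasting the two evolutions on a hypothetical 10-node line, reporting the arrival probability at the far end (the quantum walk spreads ballistically, the classical walk diffusively):

```python
import numpy as np
from scipy.linalg import expm

n = 10
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # line graph
L = np.diag(A.sum(axis=1)) - A

p0 = np.zeros(n); p0[0] = 1.0                     # both walks start at node 0
for t in [1.0, 3.0, 5.0]:
    p_classical = expm(-L * t) @ p0               # dp/dt = -Lp
    psi = expm(-1j * L * t) @ p0.astype(complex)  # d|psi>/dt = -iL|psi>
    p_quantum = np.abs(psi) ** 2                  # measurement probabilities
    print(t, p_classical[-1], p_quantum[-1])
```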
Walks compared
Classical walk Quantum walk
Quantum vs Classical
• Hitting time behavior of quantum walks is completely different from
its classical counterpart (see Fig. quantum vs classical on a line)
[Figure: probability vs. time for a quantum walk and a classical random walk on a line]
Why interesting
• Classical walk characterised by a probability
state-vector.
• In the quantum case, the walk is characterised by a
complex wave function.
• In the quantum case, interference and entanglement
lead to interesting phenomena not present in the
classical case.
Relations to Symmetry
• Symmetrical substructures
result in destructive and
constructive interference
patterns
• Interference leads to
cancellation of backtracking
paths and faster hitting
times
Uses in ML/PR
• Interference effects endow QW algorithms with properties not
exhibited by classical walks - quantum weirdness.
• Lift co-spectralities – better distinguish trees and strongly regular
graphs (Emms et al., PR 2009).
• Quantum commute time – shows long range symmetry sensitivity
(Emms, QIC 2010).
• Quantum information theory of walk leads to symmetry detection
(Rossi et al Phys Rev E, 2013) and new family of symmetry
sensitive graph kernels.
Embeddings of conjoined
trees: classical and
quantum commute times. In
the case of the quantum commute
time, symmetrically placed
nodes embed to the same
location in the subspace.
Hitting time
Q(u,v): Expected number of steps
of a random walk before node v is
visited commencing from node u.
Commute time
CT(u,v): Expected time taken for a
random walk to travel from node u
to node v and then return again
CT(u,v)=Q(u,v)+Q(v,u)
Idea: compute edge
attribute that is robust to
modifications in edge-
structure.
Commute time: averages over all
paths connecting a pair of nodes.
Effects of edge deletion reduced.
Green’s function
• Spectral representation
• Meaning: pseudo-inverse of the Laplacian
$$G(u,v) = \sum_{i=2}^{|V|}\frac{1}{\lambda_i}\,\phi_i(u)\,\phi_i(v)$$
Commute Time
• Commute time
– Hitting time and the Green’s function
– Commute time and Laplacian eigen-spectrum
$$Q(u,v) = \frac{vol}{d_v}\,G(v,v) - \frac{vol}{d_u}\,G(u,v)$$
$$CT(u,v) = Q(u,v) + Q(v,u) = \frac{vol}{d_u}\,G(u,u) + \frac{vol}{d_v}\,G(v,v) - \frac{vol}{d_u}\,G(u,v) - \frac{vol}{d_v}\,G(v,u)$$
$$CT(u,v) = vol\sum_{i=2}^{|V|}\frac{1}{\lambda_i}\bigl(\phi_i(u) - \phi_i(v)\bigr)^2$$
Commute Time Embedding
• Embedding that preserves commute time has
co-ordinate matrix (vectors of co-ordinates are
columns):
$$\Theta = \sqrt{vol}\,\Lambda^{-1/2}\,\Phi^T$$
Example embedding
Embedding of a wheel with 4 spokes
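A numpy sketch of the embedding, with commute times recovered as squared Euclidean distances in it, using the wheel-with-4-spokes example (hub node 0 joined to a 4-cycle):

```python
import numpy as np

def commute_time_embedding(W):
    """Theta = sqrt(vol) Lambda^{-1/2} Phi^T on the non-zero eigenpairs
    (assumes a connected graph); columns are node coordinates."""
    L = np.diag(W.sum(axis=1)) - W
    vol = W.sum()
    lam, phi = np.linalg.eigh(L)
    lam, phi = lam[1:], phi[:, 1:]     # drop the zero eigenvalue
    return np.sqrt(vol) * np.diag(lam ** -0.5) @ phi.T

def commute_time(W):
    Y = commute_time_embedding(W)
    diff = Y[:, :, None] - Y[:, None, :]
    return (diff ** 2).sum(axis=0)     # CT(u,v) = ||y_u - y_v||^2

W = np.zeros((5, 5))
for u, v in [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]:
    W[u, v] = W[v, u] = 1.0
print(np.round(commute_time(W), 3))
```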
Complexity
Information theory, graphs and
kernels.
Protein-Protein Interaction Networks
Characterising graphs
• Topological: e.g. average degree, degree distribution, edge-density, diameter, cycle frequencies etc.
• Spectral or algebraic: use eigenvalues of adjacency matrix or Laplacian, or equivalently the coefficients of the characteristic polynomial.
• Complexity: use information theoretic measures of structure (e.g. Shannon entropy).
Complexity characterisation
• Information theory: entropy measures
• Structural pattern recognition: graph
spectral indices of structure and topology.
• Complex systems: measures of centrality,
separation, searchability.
Information theory
• Entropic measures of complexity:
Shannon, Erdős–Rényi, von Neumann.
• Description length: fitting of models to
data, entropy (model cost) tensioned
against log-likelihood (goodness of fit).
• Kernels: Use entropy to compute Jensen-Shannon
divergence
Entropy
• Thermodynamics: measure of disorder in a system. Change
in entropy with energy measures the temperature of the system:
ΔE = TΔH.
• Statistical mechanics: entropy is a measure of uncertainty over the
microstates of a system, H = −k Σ_i p_i ln p_i (Boltzmann).
• Quantum mechanics: confusion of states, H = −k Tr[ρ ln ρ] in
terms of the density matrix of the states (von Neumann).
• Information theory: Shannon information H = −Σ_i p_i ln p_i, in
terms of the probability of transmission of a message in an
information channel.
Von Neumann entropy
• System with Hamiltonian (energy) whose
eigenvalues are λ_i, i = 1,...,N.
• VN entropy S = −Σ_i λ_i ln λ_i
• Quadratic approximation: −λ_i ln λ_i ≈ λ_i(1 − λ_i)
• Passerini and Severini – Hamiltonian is the
normalised Laplacian L̂ = D^{−1/2}(D − A)D^{−1/2}
Von-Neumann Entropy
• Derived from normalised Laplacian
spectrum
• Comes from quantum mechanics and is
entropy associated with density matrix.
$$H_{VN} = -\sum_{i=1}^{|V|}\frac{\hat\lambda_i}{2}\,\ln\frac{\hat\lambda_i}{2}$$
$$\hat L = D^{-1/2}(D - A)\,D^{-1/2} = \hat\Phi\hat\Lambda\hat\Phi^T$$
Approximation
• Quadratic entropy
• In terms of matrix traces
$$H_{VN} \approx \sum_{i=1}^{|V|}\frac{\hat\lambda_i}{2}\left(1 - \frac{\hat\lambda_i}{2}\right) = \frac{1}{2}\sum_{i=1}^{|V|}\hat\lambda_i - \frac{1}{4}\sum_{i=1}^{|V|}\hat\lambda_i^2$$
$$H_{VN} \approx \frac{1}{2}\mathrm{Tr}[\hat L] - \frac{1}{4}\mathrm{Tr}[\hat L^2]$$
Computing Traces
• Normalised Laplacian
• Normalised Laplacian squared
$$\mathrm{Tr}[\hat L] = |V|, \qquad \mathrm{Tr}[\hat L^2] = |V| + \sum_{(u,v)\in E}\frac{1}{d_u d_v}$$
Simplified entropy
Collecting terms, the von Neumann entropy reduces to
$$H_{VN} \approx \frac{|V|}{4} - \frac{1}{4}\sum_{(u,v)\in E}\frac{1}{d_u d_v}$$
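A numpy sketch comparing the exact von Neumann entropy with the degree-based approximation (assuming, as in the formula above, that the edge sum runs over ordered connected vertex pairs):

```python
import numpy as np

def vn_entropy_exact(W):
    d = W.sum(axis=1)
    Dq = np.diag(1.0 / np.sqrt(d))
    lam = np.linalg.eigvalsh(np.eye(len(d)) - Dq @ W @ Dq)
    x = lam[lam > 1e-12] / 2.0
    return float(-np.sum(x * np.log(x)))

def vn_entropy_degrees(W):
    """|V|/4 - (1/4) * sum over ordered connected pairs of 1/(d_u d_v)."""
    d = W.sum(axis=1)
    n = len(d)
    s = sum(1.0 / (d[u] * d[v])
            for u in range(n) for v in range(n) if W[u, v] > 0)
    return n / 4.0 - s / 4.0

W = np.ones((6, 6)) - np.eye(6)        # K6 as a test graph
print(vn_entropy_exact(W), vn_entropy_degrees(W))   # ~1.53 vs 1.2
```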
Homogeneity index (Estrada)
$$\rho(G) = \sum_{(u,v)\in E}\left(\frac{1}{\sqrt{d_u}} - \frac{1}{\sqrt{d_v}}\right)^2, \qquad \tilde\rho(G) = \frac{\rho(G)}{|V| - 2\sqrt{|V|-1}}$$
Based on degree
statistics
Homogeneity meaning
$$\rho(G) \sim \sum_{(u,v)\in E}\bigl(CT(u,v) - 2\bigr)$$
Limit of large degree
Largest when commute time differs from 2
due to large number of alternative
connecting paths.
Extend to directed graphs
• Use directed Laplacian
• Find approximations for strongly and
weakly directed graphs.
Directed Laplacian
Transition matrix left eigenvector components
Normalised Laplacian
Trace calculations
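The equations for these slides are lost in this transcript; the sketch below is my reconstruction of Chung's directed normalised Laplacian, built from the transition matrix and the components of its left (stationary) eigenvector, for a strongly connected hypothetical digraph:

```python
import numpy as np

def directed_laplacian(A):
    P = A / A.sum(axis=1, keepdims=True)          # out-degree normalised
    w, vl = np.linalg.eig(P.T)                    # left eigenvectors of P
    phi = np.real(vl[:, np.argmin(np.abs(w - 1.0))])
    phi = np.abs(phi) / np.abs(phi).sum()         # stationary distribution
    S = np.diag(np.sqrt(phi))
    Sinv = np.diag(1.0 / np.sqrt(phi))
    # Chung: L = I - (S P S^-1 + S^-1 P^T S) / 2, symmetric by construction
    return np.eye(len(A)) - 0.5 * (S @ P @ Sinv + Sinv @ P.T @ S)

A = np.array([[0., 1., 0.],                       # 0 -> 1
              [0., 0., 1.],                       # 1 -> 2
              [1., 1., 0.]])                      # 2 -> 0, 2 -> 1
print(np.linalg.eigvalsh(directed_laplacian(A)))  # real spectrum, smallest 0
```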
Von Neumann Entropy
• Partition the edge set into a union of
unidirectional edges (one way only), E1, and
bidirectional edges (both ways), E2.
• E = E1 ∪ E2
Directed Graphs
$$H_{VN}^{D} = 1 - \frac{1}{|V|} - \frac{1}{2|V|^2}\left[\sum_{(i,j)\in E}\frac{d_i^{in}}{d_j^{in}\,(d_i^{out})^2} - \sum_{(i,j)\in E_2}\frac{1}{d_i^{out}\,d_j^{out}}\right]$$
Von Neumann entropy comes from the in-degree
and out-degree of the vertices connected by edges.
Development comes from the Laplacian of a directed graph (Chung).
Strongly Directed Graphs
$$H_{VN}^{SD} \approx 1 - \frac{1}{|V|} - \frac{1}{2|V|^2}\sum_{(i,j)\in E}\frac{1}{d_i^{out}\,d_j^{in}}$$
Most edges are unidirectional, with few
bidirectional edges (|E1| >> |E2|).
Weakly Directed Graphs
$$H_{VN}^{WD} \approx 1 - \frac{1}{|V|} - \frac{1}{2|V|^2}\sum_{(i,j)\in E}\frac{d_i^{in}/d_i^{out} + d_j^{in}/d_j^{out}}{d_i^{out}\,d_j^{in}}$$
Most edges are bidirectional, with few
unidirectional edges (|E1| << |E2|).
Development comes from Laplacian of a directed graph (Chung).
Links to assortativity
• Weakly directed: proportional to in/out degree
ratio of nodes; inversely proportional to product
of out degree of start node and in degree of end
node (degree flow).
• Strongly directed: inversely proportional to
product of out degree of start node and in
degree of end node.
Evolving graphs
Distinguishing different graphs
Financial Market Data
• Look at time series correlation for set of
leading stocks.
• Create undirected or directed links on
basis of time series correlation.
• Directed von
Neumann entropy as
stock market
network evolves.
• Troughs represent
financial crises,
while the deepest
one corresponds to
Black Monday, 1987.
[Figure annotations: Black Monday; Friday the 13th mini-crash; Bankruptcy of Lehman Brothers; 1997 Asian financial crisis]
• Directed von
Neumann entropy
change during Black
Monday, 1987.
• Entropy shows a sharp drop on Black
Monday and recovers within a few
trading days.
• Directed von
Neumann entropy
change during 1997
Asian financial
crisis.
• Entropy decreases
significantly and
does not recover
until after a
relatively long time.
• Directed von
Neumann entropy
change during
Bankruptcy of
Lehman Brothers.
• Entropy fluctuates significantly and
does not recover for a long time.
• Comparison of
directed and
undirected von
Neumann entropies as
stock market network
evolves.
• Directed entropy is more sensitive to
slight changes in network structure than
undirected entropy.
• Directed and
undirected von
Neumann entropies
change during Friday
the 13th mini-crash.
• Directed entropy performs better at
catching minor changes in the network
structure than its undirected analogue.
• Connected component
number as stock market
network evolves.
• Network becomes
fragmented during
financial crises.
• Von Neumann entropy is able to catch such
structural changes.
• Node degree assortativity
coefficients as stock
market network evolves.
• Top-left: out-degree & out-
degree; top-right: in-degree
& in-degree; bottom-left:
out-degree & in-degree;
bottom-right: in-degree &
out-degree.
• The smallest value appears
on Black Monday, 1987.
• Directed/undirected von
Neumann entropies and
node degree assortativity
coefficients as stock
market network evolves.
• The indegree-outdegree assortativity
behaves slightly differently from the
other three.
• Both directed and
undirected entropies are
able to detect the changes.
• Histograms of added
edges in degree space
during financial crisis
(Black Monday).
• Added edges are most likely to appear
between two nodes with low degrees.
• Histograms of removed
edges in degree space
during financial crisis
(Black Monday).
• Edges that connect two nodes with low
degrees are more likely to disappear.
• Histograms of
unchanged edges in
degree space during
financial crisis (Black
Monday).
• Edges that connect two
nodes with higher
degrees are more likely
to remain unchanged.
• Histograms of added/removed/unchanged edges on different von Neumann entropy
components during financial crisis (Black Monday).
• Added/removed edges are most likely to occur either when the first
component is equal to one or when the other components have small values.
• The probability that an edge remains unchanged is highest when the first
component equals one, and does not depend on the other two components.
Graph Kernels
Kernel Functions
• Graphs are structures and do not reside in a Euclidean or
even a metric space.
– Can we embed them in such spaces?
– Based on their similarities or features characterising their internal
structure.
• What about this data: if we do so, can we be sure the
embedding is useful and will
allow us to easily separate
different classes of graph?
Here the data are not separable with a
linear boundary.
Kernel Functions
• Need a transformation of the data which renders it
separable.
• Try this non-linear transformation
• Only two of the three dimensions (√2xy ignored)
• Now there is a linear subspace which separates the two classes
$$\begin{pmatrix}x\\ y\end{pmatrix}\;\mapsto\;\begin{pmatrix}x^2\\ \sqrt{2}\,xy\\ y^2\end{pmatrix}$$
[Figure: data plotted in the original (x, y) plane and in the transformed (x², y²) plane]
Kernel Functions
• Consider two points a and b in the original feature space from the previous example
• They transform as follows
• What is the dot-product in the new space?
• Let K(u_a, u_b) = (u_a · u_b)² = v_a · v_b
• K(ua,ub) is an example of a kernel function
– Equal to the dot-product in the new space
– A function of the positions in the original space
$$\mathbf{u}_a = \begin{pmatrix}x_a\\ y_a\end{pmatrix} \mapsto \mathbf{v}_a = \begin{pmatrix}x_a^2\\ \sqrt{2}\,x_a y_a\\ y_a^2\end{pmatrix}, \qquad \mathbf{u}_b = \begin{pmatrix}x_b\\ y_b\end{pmatrix} \mapsto \mathbf{v}_b = \begin{pmatrix}x_b^2\\ \sqrt{2}\,x_b y_b\\ y_b^2\end{pmatrix}$$
$$\mathbf{v}_a\cdot\mathbf{v}_b = x_a^2 x_b^2 + 2\,x_a y_a x_b y_b + y_a^2 y_b^2 = (x_a x_b + y_a y_b)^2 = (\mathbf{u}_a\cdot\mathbf{u}_b)^2$$
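A tiny numeric check of this identity on hypothetical points (the kernel value equals the dot product after the explicit map):

```python
import numpy as np

phi = lambda p: np.array([p[0] ** 2, np.sqrt(2) * p[0] * p[1], p[1] ** 2])

ua, ub = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(ua, ub) ** 2)            # K(ua, ub) = (ua . ub)^2  -> 1.0
print(np.dot(phi(ua), phi(ub)))       # va . vb                  -> 1.0
```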
Kernel Embedding
• The kernel function tells us everything because we can find the points in the new space just using the values of the kernel function
• Let Xu be the datamatrix for the original points and Xv be the corresponding datamatrix for the new transformed points, with data as columns
• Form the kernel matrix of the new points (note similarity to kernel matrix in MDS)
• An element of K is given by
• So given the original datamatrix and the kernel function, we can find the kernel matrix of the new points
$$K = X_v^T X_v, \qquad K_{ij} = \mathbf{v}_i\cdot\mathbf{v}_j = K(\mathbf{u}_i,\mathbf{u}_j)$$
Kernel Embedding
• The rest follows exactly our analysis in MDS
• Hence given a kernel function and the original points, we
can find the transformed points
• This is called the kernel embedding
$$K = X_v^T X_v = U\Lambda U^T = \left(\Lambda^{1/2}U^T\right)^T\left(\Lambda^{1/2}U^T\right)$$
$$X_v = \Lambda^{1/2}\,U^T$$
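A numpy sketch of the embedding step: eigendecompose K, take X_v = Lambda^{1/2} U^T, and verify that the recovered coordinates reproduce the kernel matrix (points and kernel are hypothetical):

```python
import numpy as np

def kernel_embedding(K):
    lam, U = np.linalg.eigh(K)
    lam = np.clip(lam, 0.0, None)     # K must be positive semi-definite
    return np.diag(np.sqrt(lam)) @ U.T

pts = np.array([[1.0, 2.0], [3.0, -1.0], [0.0, 1.0]])
K = (pts @ pts.T) ** 2                # quadratic kernel K(u_i, u_j)
Xv = kernel_embedding(K)
print(np.allclose(Xv.T @ Xv, K))      # True: X_v^T X_v = K
```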
Properties of a kernel function
Not every function is a kernel function.
Must be a function of the original positions, and equal to a dot-product
in some transformed space.
This gives conditions: the first two come directly from the fact that K(·,·) is a
dot-product; the final one enables K to embed exactly into the new space (kernel
embedding).
$$K(\mathbf{x},\mathbf{y}) = K(\mathbf{y},\mathbf{x}) \qquad \text{(symmetry)}$$
$$K(\mathbf{x},\mathbf{y})^2 \le K(\mathbf{x},\mathbf{x})\,K(\mathbf{y},\mathbf{y}) \qquad \text{(Cauchy-Schwarz inequality)}$$
K must be positive semi-definite, i.e. zero or positive eigenvalues (Mercer criterion).
Kernelising an algorithm
• Many methods can be kernelised; this means they should
depend on values of the kernel function rather than point
positions
• Need to re-write to use elements of kernel matrix K
Observation:
– Any direction of interest with respect to our data must lie within
the space filled by the data
• There is no data variation in any other direction
• This is called the span of the points
• Implies any interesting vector can be written
$$\mathbf{u} = a_1\mathbf{x}_1 + a_2\mathbf{x}_2 + \cdots + a_n\mathbf{x}_n = X\mathbf{a}$$
• We can try to find a instead of u
– This often enables us to kernelise an algorithm
Graph Kernels
• Random Walk Kernel (Gartner et al 2003)
– Count the number of matching walks between two graphs
– A is the adjacency matrix of the product graph of G1 and G2
– k is the walk length
– The number of walks becomes very large
– The random walk graph kernel suffers from the problem of tottering
– Reduces expressive power and masks structural differences
$$K(G_1,G_2) = \sum_{i,j\in V}\sum_{k=0}^{\infty}\lambda^k\left[A^k\right]_{ij}$$
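A sketch of the kernel with the k-sum taken in closed form as (I − λA)⁻¹ (valid when λ times the spectral radius of the product graph is below one; graphs and λ here are hypothetical):

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    Ax = np.kron(A1, A2)              # adjacency of the direct product graph
    n = Ax.shape[0]
    # sum_k lam^k Ax^k = (I - lam Ax)^{-1}; the kernel sums all its entries
    return float(np.ones(n) @ np.linalg.inv(np.eye(n) - lam * Ax) @ np.ones(n))

A_triangle = np.ones((3, 3)) - np.eye(3)
A_path = np.diag(np.ones(2), 1) + np.diag(np.ones(2), -1)
print(random_walk_kernel(A_triangle, A_path))
```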
Backtrackless Random Walk
• A random walk of length k is a sequence of vertices
– Such that
• A backtrackless random walk has the additional condition
– A sequence of oriented edges, excluding backtracking step
$$u_1, u_2, \ldots, u_{k+1} \quad\text{with}\quad e_i = (u_i, u_{i+1})\in E$$
$$e_{i+1} \neq e_i^{-1}$$
Backtrackless Random Walk Kernel
• The backtrackless random walk kernel is
• Defined on the product graph of the OLGs
• By eliminating the reverse edges in the OLG, we eliminate
backtracking
• Complexity is a problem
– Efficient method to compute given in (Aziz, Wilson, Hancock,
IEEE TNNLS 2013)
$$K(G_1,G_2) = \sum_{i,j\in V_\times}\sum_{k=0}^{\infty}\lambda^k\left[A_\times^k\right]_{ij}$$
$$V(G_1\times G_2) = \{(v_1,v_2) : v_1\in V_1,\ v_2\in V_2\}$$
$$E(G_1\times G_2) = \{((u_1,u_2),(v_1,v_2)) : (u_1,v_1)\in E_1 \wedge (u_2,v_2)\in E_2\}$$
Jensen-Shannon Kernel
• Defined in terms of J-S divergence
• Properties: extensive, positive.
• Don’t need to compare walks (Bai Lu,
Hancock, JMIV 2013).
$$JS(G_i,G_j) = H(G_i\oplus G_j) - \frac{1}{2}\bigl[H(G_i) + H(G_j)\bigr]$$
$$K_{JS}(G_i,G_j) = \ln 2 - JS(G_i,G_j)$$
Computation
• Construct direct product graph for each
graph pair.
• Compute von Neumann entropy difference
between the product graph and the two graphs
individually.
• Construct kernel matrix over all pairs.
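A minimal sketch of that pipeline (the composite is taken as the direct product graph, per the slide, and the entropy is the normalised-Laplacian von Neumann entropy from earlier; all names are illustrative):

```python
import numpy as np

def vn_entropy(W):
    d = W.sum(axis=1)
    Dq = np.diag(1.0 / np.sqrt(d))
    lam = np.linalg.eigvalsh(np.eye(len(d)) - Dq @ W @ Dq)
    x = lam[lam > 1e-12] / 2.0
    return float(-np.sum(x * np.log(x)))

def js_kernel_matrix(graphs):
    n = len(graphs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            composite = np.kron(graphs[i], graphs[j])   # direct product graph
            js = vn_entropy(composite) - 0.5 * (vn_entropy(graphs[i])
                                                + vn_entropy(graphs[j]))
            K[i, j] = np.log(2.0) - js
    return K

g1 = np.ones((3, 3)) - np.eye(3)                        # triangle
g2 = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    g2[u, v] = g2[v, u] = 1.0                           # 4-cycle
print(js_kernel_matrix([g1, g2]))
```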
Financial Data
Learning
Generative Models
• Structural domain: define probability distribution over
prototype structure. Prototype together with parameters
of distribution minimise description length (Torsello and
Hancock, PAMI 2007) .
• Spectral domain: embed nodes of graphs into vector-
space using spectral decomposition. Construct point
distribution model over embedded positions of nodes
(Bai, Wilson and Hancock, CVIU 2009).
Deep learning
• Deep belief networks (Hinton 2006, Bengio 2007).
• Compositional networks (Amit+Geman 1999, Fergus
2010).
• Markov models (Leonardis 200
• Stochastic image grammars (Zhu, Mumford, Yuille)
• Taxonomy/category learning (Todorovic+Ahuja, 2006-
2008).
Aim
• Combine spectral and structural methods.
• Use description length criterion.
• Apply to graphs rather than trees.
Prior work
• IJCV 2007 (Torsello, Robles-Kelly, Hancock) –shape classes from edit distance using pairwise clustering.
• PAMI 06 and Pattern Recognition 05 (Wilson, Luo and Hancock) – graph clustering using spectral features and polynomials.
• PAMI 07 (Torsello and Hancock) – generative model for variations in tree structure using description length.
• CVIU09 (Xiao, Wilson and Hancock) – generative model from heat-kernel embedding of graphs.
Structural learning
Using description length
Description length
• Wallace+Freeman: minimum message
length.
• Rissanen: minimum description length.
Use log-posterior probability to locate model that is
optimal with respect to code-length.
Similarities/differences
• MDL: selection of model is aim; model
parameters are simply a means to this
end. Parameters usually maximum
likelihood. Prior on parameters is flat.
• MML: Recovery of model parameters is
central. Parameter prior may be more
complex.
Coding scheme
• Usually assumed to follow an exponential
distribution.
• Alternatives are universal codes and predictive
codes.
• MML has two part codes (model+parameters). In
MDL the codes may be one or two-part.
Method
• Model is a supergraph (i.e. graph prototype) formed by graph union.
• Sample data observation model: Bernoulli distribution over nodes and edges.
• Model complexity: von Neumann entropy of the supergraph.
• Fitting criterion:
MDL-like: make ML estimates of the Bernoulli parameters.
MML-like: two-part code for data-model fit + supergraph complexity.
Model overview
• Description length criterion
code-length = negative log-likelihood + model code-length (entropy)
Data-set: set of graphs 𝒢
Model: supergraph Ĝ with parameters of the network model; likelihood p(g|Ĝ)
Updates by expectation maximisation:
network parameters (M-step) + correspondence indicators (E-step).
$$L(\mathcal{G},\hat G) = LL(\mathcal{G}\,|\,\hat G) + H(\hat G)$$
Generative model
• Train on graphs with set of predetermined
characteristics.
• Sample using Monte-Carlo.
• Reproduces characteristics of the training set,
e.g. spectral gap, node degree
distribution, etc.
Erdos Renyi
Barabasi Albert (scale free)
Delaunay Graphs
Experiments---generate new samples
Some ongoing work
• Using quantum walks:
probe graphs in different ways from classical walks,
responding to symmetric or distant structure more
rapidly.
• Use quantum Jensen-Shannon divergence:
directly model the similarity of walks using density matrices,
and develop a new family of kernels for quantum walks.
• Wavepackets:
solve the wave equation on a graph using the edge-based
Laplacian. Simulate wavepackets to obtain a new
signature.
Conclusions
• Shown how graph spectra can be used as
characterisations of both structure and
complexity.
• Presented MDL framework which uses
complexity characterisation to learn generative
model of graph structure.
• Future: Deeper measures of structure
(symmetry) and detailed dynamics of network
evolution.