
Page 1:

Hiperlearn: A Channel Learning Architecture

Gösta Granlund

Computer Vision Laboratory, Linköping University

SWEDEN

Page 2:

Requirements on systems

• A large number of input and output variables

• A highly non-linear mapping, which in parts is locally continuous and in other parts exhibits transient switching

• A mapping which requires interpolation between a large number of models

Page 3:

Challenge

• It would be of great benefit if complex procedures could be designed through learning of the complex and usually non-linear relationships.

Page 4:

Architecture

• A learning system architecture using a channel information representation

• This representation implies that signals are monopolar and local

• Monopolar means that data utilize only one polarity, allowing zero to represent no information

Page 5:

Architecture II

• Locality derives from a partially overlapping mapping of signals into a higher-dimensional space

• The combination of monopolarity and locality leads to a sparse representation

• Locality of features allows the generation of highly non-linear functions using a linear mapping.
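As a rough illustration of such a monopolar, local channel encoding, the sketch below maps a scalar into a set of partially overlapping cos² channels. The cos² kernel, the channel centers and the width are illustrative assumptions (the slides do not specify the kernel), not the exact encoding of the architecture.

```python
import numpy as np

def channel_encode(x, centers, width):
    """Encode a scalar x into monopolar, local channel values.

    Uses a cos^2 kernel (one common choice in the channel literature):
    each channel responds only when x lies within `width` of its center,
    so the resulting vector is non-negative (monopolar) and sparse (local).
    """
    d = np.abs(x - centers)
    return np.where(d < width, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)

centers = np.arange(0.0, 10.0 + 1e-9, 1.0)     # overlapping channels spaced 1 apart
a = channel_encode(3.3, centers, width=1.5)
print(np.round(a, 3))                           # only a few adjacent channels are non-zero
```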

Page 6:

The locality obtained in this channel representation gives two advantages:

• Nonlinear functions and combinations can be implemented using linear mappings

• Optimization in learning converges much faster

Page 7:

Overview

Page 8:

State mapping

Page 9:

Mapping

Page 10:

Page 11:

Page 12:

Page 13:

Page 14:

Mapping

Page 15:

System operation modes

The channel learning architecture can be run under two different operation modes, providing output in two different representations:

• Position encoding for discrete event mapping

• Magnitude encoding for continuous function mapping

Page 16:

Discrete event mapping

Page 17:

Discrete event mapping

There are a number of characteristics of this mode:

• Mapping is made to sets of output channels, whose response functions may be partially overlapping to allow the reconstruction of a continuous variable.

• Output channels are expected to take on some standard maximum value, say 1, but to be zero most of the time, to allow a sparse representation.

Page 18:

Discrete event mapping II

• The system state is not given by the magnitude of a single output channel, but is given by the relation between outputs of adjacent channels.

• Relatively few feature functions, or sometimes only a single feature function, are expected to map onto a particular output channel.
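A rough sketch of how such an output channel vector can be read out: the state value is recovered from the relation between the outputs of adjacent channels (here a simple local centroid, an approximation), while the overall magnitude serves as confidence. The specific decoding rule is an illustrative assumption, not the exact scheme of the architecture.

```python
import numpy as np

def channel_decode(u, centers):
    """Approximate decoding of a channel vector u.

    The encoded value is given by the relation between adjacent channel
    outputs (a local centroid), the confidence by the overall magnitude;
    an all-zero vector means "no information".
    """
    confidence = u.sum()
    if confidence == 0.0:
        return None, 0.0
    value = (u * centers).sum() / confidence
    return value, confidence

centers = np.arange(0.0, 8.0)
u = np.array([0, 0, 0.04, 0.90, 0.55, 0, 0, 0.0])   # only adjacent channels active
print(channel_decode(u, centers))                    # value close to 3.3, confidence about 1.5
```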

Page 19:

Discrete event mapping III

• The channel representation of a signal allows a unified representation of signal value and of signal confidence, where the relation between channel values represents value, and the magnitude represents confidence.

• Since the discrete event mode implies that both the feature and response state vectors are in the channel representation, the confidence of the feature vector will be propagated to the response vector if the mapping is linear.

Page 20:

The properties just listed allow the structure to be implemented as a purely linear mapping:
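The mapping itself is not reproduced in this transcript. As a small numeric illustration of such a purely linear mapping u = C a, and of how it propagates confidence (cf. the previous slide), consider the following sketch; the linkage matrix and feature vector values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((5, 8))                              # illustrative non-negative linkage matrix
a = np.array([0, 0, 0.04, 0.90, 0.55, 0, 0, 0.0])   # sparse, monopolar feature channel vector

u_full = C @ a                # response channel vector at full confidence
u_half = C @ (0.5 * a)        # same encoded value, half the confidence

# The relation between response channels (the encoded value) is unchanged;
# only the magnitude (the confidence) scales through the linear map.
assert np.allclose(u_half, 0.5 * u_full)
```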

Page 21:

Continuous function mapping

Page 22:

Continuous function mapping

There are a number of characteristics of this mode:

• It uses rather complete sets of feature functions, compared to the mapping onto response channels in discrete event mode.

• The structure can still handle local feature dropouts without adverse effects upon well behaved regions.

Page 23:

Continuous function mapping II

• Mapping is made to continuous output response variables, whose magnitude may vary over a large range.

• A high degree of accuracy in the mapping can be obtained if the feature vector is normalized, as stated below.

Page 24:

• In this mode, however, it is not possible to represent both a state value x, and a confidence measure r, except if it is done explicitly.

Page 25:

Confidence measure

• For the confidence measure a linear model is assumed,

Page 26:

• The model for this mode comes out as

Page 27:

Overview of operation modes

• Although the two modes use a similar mapping, there are distinctive differences, as illustrated in the table on the next slide.

Page 28:

Overview of operation modes

Page 29:

Comparison between modes

• The major difference is in how they deal with zero, or more properly with no information and with interpolation.

• This depends upon whether the output is a channel vector or not.

• If it is, position encoding will be used, making use of confidence.

• This should be viewed as the normal mode of operation for this architecture, as this forms the appropriate representation for continued processing at a next level.

Page 30:

• A conversion back to scalars is used mainly for easier interpretation of the states or for control of some linear actuator.

Page 31:

Associative structure

Page 32:

Training setup

Page 33:

Batch mode training

Page 34:

Page 35:

Regularization properties

• An unrestricted least squares solution tends to give a full matrix with negative and positive coefficients. These coefficients do their best to push and pull the basis functions to minimize the error for the particular training set, a.k.a. overfitting.

Page 36:

• In many situations a linkage matrix with a small norm is preferred, e.g. in Tikhonov regularization, because a small norm reduces the global noise propagation.

Page 37:

• It is desirable that a large error, due to drop-outs, heavy noise etc., in one region of the state space should not affect the performance in other, sufficiently distant regions of the state space.

• This can be achieved using a non-negativity constraint together with locality.

Page 38:

• While non-negativity does not give the smallest possible error on the training set, nor the smallest norm of the linkage matrix, it acts as a regularization.

• The non-negativity constraint will be referred to as monopolar regularization.

Page 39:

• Monopolarity in addition gives a more sparse linkage matrix, allowing faster processing.

Page 40:

Tikhonov regularization

Page 41:

Monopolar regularization

Page 42:

Tikhonov versus monopolar

Page 43:

Properties Tikhonov

• Tikhonov regularization tends to produce non-sparse, non-local, solutions which give a computationally more complex system.

• We can also observe ringing effects which typically occur for linear methods.

Page 44:

Properties monopolar

• The non-negativity constraint on the other hand gives a solution with a slightly larger norm, but it is both sparse and local.

• Note that only two elements in c_mono are non-zero.
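A small sketch contrasting the two regularizers on toy data: a Tikhonov-regularized least-squares solution (dense, mixed signs) versus a non-negativity-constrained solution. The toy matrices and the use of SciPy's nnls routine as a stand-in for the monopolar optimization are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(40, 10)))        # monopolar feature samples (rows = samples)
c_true = np.zeros(10)
c_true[[3, 4]] = [0.7, 0.3]                  # sparse, local ground-truth linkage vector
u = A @ c_true + 0.01 * rng.normal(size=40)  # noisy responses

# Tikhonov-regularized least squares: small norm, but dense with mixed signs
lam = 0.1
c_tik = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ u)

# Non-negativity (monopolar) constrained least squares: sparse and local
c_mono, _ = nnls(A, u)

print(np.round(c_tik, 3))      # typically all coefficients non-zero, both signs
print(np.round(c_mono, 3))     # typically only a few non-zero coefficients
```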

Page 45:

Loosely coupled subsystems

• The degree to which variables are shared among subsystems can be expressed as a state distance or coupling metric between subsystems.

• Subsystems at a larger distance according to this metric will not affect each other, e.g. in a process of optimization.

Page 46:

Full versus sparse local matrix

Page 47:

Rearranged sparse local matrix

Page 48:

Locality of models

• The local behavior of the feature and response channels, together with the monopolar constraint on the linkage matrix, implies that the solution for one local region of the response space does not affect the solution in other, sufficiently distant, regions of the response space.

Page 49:

• This property is also essential as we want to add new features after training the system, such as in the implementation of an incremental learning procedure.

• A new feature that is never active within the active domain of a response channel will not affect the previously learned links to that response channel.

Page 50:

Loose coupling leads to fast convergence

• Another important advantage of loosely coupled subsystem structures is that the iterative procedure used to compute the model parameters, in this case the linkage matrix, exhibits fast convergence.

Page 51:

Optimization

• C is computed as the solution to the constrained weighted least-squares problem

• where

Page 52:

Weight matrix

• The weight matrix W, which controls the relevance of each sample, is chosen

• The minimization problem does not generally have a unique solution, as it can be under-determined or over-determined.

Page 53:

Normalization

• This can be rewritten as

• We can interpret D_s and D_f as normalizations in the sample and feature domains, respectively.

• We will refer to these as sample domain normalization and feature domain normalization.

Page 54:

Solution for linkage matrix

• The solution is derived from the projected Landweber method
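The update formula itself is not reproduced in this transcript. A minimal sketch of a projected Landweber iteration with the monopolar (non-negativity) constraint is given below; the step length, iteration count, threshold and the simple projection are illustrative assumptions, not the exact scheme on the slides.

```python
import numpy as np

def projected_landweber(A, u, n_iter=500, c_min=1e-3):
    """Sketch of a projected Landweber iteration for one linkage vector c:
    minimize ||A c - u||^2 subject to c >= 0 (the monopolar constraint).

    Each plain Landweber (gradient) step is followed by a projection onto
    the constraint set; small coefficients are also thresholded away during
    the iteration to keep the solution sparse.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step length for convergence
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = c + step * (A.T @ (u - A @ c))      # gradient step on the residual
        c = np.maximum(c, 0.0)                  # projection: enforce non-negativity
        c[c < c_min] = 0.0                      # remove small coefficients (see later slide)
    return c
```

Since the optimization separates over the response channels (see the separability slide further on), the full linkage matrix C can be assembled by running such an iteration once per response channel.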

Page 55:

Associative structure

Page 56:

Separability of solution

• Note that the structure implies independent optimization of each response, i.e. the optimization is performed locally in the response domain.

• Hence we may, as an equivalent alternative, optimize each linkage vector separately as

Page 57:

Removal of small coefficients in C

• Coefficients of matrix C which are below a certain threshold Cmin shall be eliminated altogether, as this gives a more sparse matrix.

• The thresholding is implemented repeatedly during the iterative procedure.

• The increase of error on new data turns out to be minimal, as the iterative optimization will compensate for this removal of coefficients.

Page 58:

Coefficients of C

• Furthermore, the coefficients of matrix C shall be limited in magnitude, as this gives a more robust system

• This can be achieved by using an upper-bound constraint c ≤ c_max on the individual coefficients.

• The assumption is that significant feature values a_h shall be within some standard magnitude range, and consequently the value of a non-zero a_h shall be above some minimum value a_min.

Page 59:

Normalization modes

• The normalization can be viewed as a way to control the gain in the feedback loop that the iterative optimization procedure implies.

• This normalization can be put in either of the two representation domains, the sample domain or the feature domain, but with different effects upon convergence, accuracy, etc.

• For each choice of sample domain normalization D_s there are non-unique choices of feature domain normalization D_f.

Page 60:

Normalization modes

• The choice of normalization depends on the operation mode, i.e. continuous function mapping or discrete event mapping.

• There are some choices that are of particular interest.

Page 61:

Associative structure

Page 62:

Discrete event mapping

• Discrete event mode corresponds to a sample domain normalization matrix

• This gives

Page 63:

Continuous function mapping

• One approach is to assume that all training samples have the same confidence, i.e. s = 1, and compute and

• such that

Page 64:

• We can use the sum of the feature channels as a measure of confidence:

Page 65:

Mixed domain normalization

• The corresponding sample domain normalization matrix is

Page 66:

Mixed domain normalization

• And the feature domain normalization matrix becomes

Page 67:

• It can also be argued that a feature element which is frequently active should have a higher confidence than a feature element which is rarely active.

• This can be included in the confidence measure by using a weighted sum of the features, where the weight is proportional to the mean in the sample domain:
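The weighted sum itself is not reproduced in this transcript. A rough sketch of such a confidence measure, taking the proportionality to the feature mean in the sample domain literally from the bullet above (the exact normalization constants are assumptions), could look as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.abs(rng.normal(size=(100, 12)))   # training features: one row per sample, one column per channel

w = A.mean(axis=0)                       # weight per feature: its mean activity in the sample domain
a = A[0]                                 # feature vector of one sample
confidence = w @ a                       # weighted sum of the features as confidence measure
```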

Page 68:

Sample domain normalization

• This corresponds to the sample domain normalization matrix

• which gives

Page 69:

Sample domain normalization

• This corresponds to the sample domain normalization matrix

• which gives

Page 70:

Summary of normalization modes

Page 71:

Sensitivity analysis for continuous function mode

• A response state estimate is generated from a feature vector a according to the model

• Regardless of the choice of normalization vector w^T, the response will be independent of any global scaling of the features, i.e.
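The model expression is not reproduced in this transcript. The small numeric check below assumes a normalized model of the form û = C a / (wᵀ a); this form is an assumption made for illustration of the stated scale invariance, not the exact model from the slides.

```python
import numpy as np

rng = np.random.default_rng(3)
C = rng.random((4, 10))            # illustrative linkage matrix
w = rng.random(10)                 # some normalization vector
a = np.abs(rng.normal(size=10))    # a feature vector

def response(a):
    return (C @ a) / (w @ a)       # assumed normalized continuous-mode model

gamma = 7.3                        # arbitrary global scaling of the features
assert np.allclose(response(a), response(gamma * a))   # response is unchanged
```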

Page 72:

• If multiplicative noise is applied, represented by a diagonal matrix , we get

Page 73:

Experimental verification

• A generalization of the common CMU twin-spiral pattern has been used, as this pattern is often used to evaluate classification networks.

• We have chosen to make the pattern more difficult in order to show that the proposed learning machinery can represent both continuous function mappings (regression) and mappings to discrete classes (classification).

Page 74:

Test pattern

Page 75:

The pattern was selected to demonstrate the following properties:

• The ability to approximate piecewise continuous surfaces.

• The ability to describe discontinuities (i.e. assignment into discrete classes).

• The transition between interpolation and representation of a discontinuity.

• The inherent approximation introduced by the sensor channels.

Page 76:

Noise immunity test

• The robustness is analyzed with respect to three types of noise on the feature vector:
  – Additive noise
  – Multiplicative noise
  – Impulse noise
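A small sketch of applying the three noise types to a feature vector; the noise levels, distributions and number of corrupted elements are arbitrary choices for illustration, not the levels used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
a = np.abs(rng.normal(size=20))                     # an illustrative feature vector

a_add  = a + 0.1 * rng.normal(size=a.shape)         # additive noise
a_mult = a * (1 + 0.1 * rng.normal(size=a.shape))   # multiplicative noise

a_imp = a.copy()                                    # impulse noise: corrupt a few random elements
idx = rng.choice(a.size, size=2, replace=False)
a_imp[idx] = rng.uniform(0.0, a.max(), size=idx.size)
```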

Page 77:

Experimental setup

• In the experiments, a three-dimensional state space is used.

• The sensor space and the response space are orthogonal projections of the state space.

Page 78:

Test image function

• Note that this mapping can be seen as a surface of points with

• The analytic expression for is:

Page 79:

Associative network variants

1. Mixed domain normalization bipolar network

2. Mixed domain normalization monopolar network

3. Sample domain normalization monopolar network

4. Uniform sample confidence monopolar network

5. Discrete event mode monopolar network

Page 80:

Performance of bipolar network (#1) under varied number of samples. Top left to bottom right: N = 63, 125, 250, 500, 1000, 2000, 4000, 8000.

Page 81:

Page 82:

Performance of discrete event network (#5) under varied number of samples. Top left to bottom right: N = 63, 125, 250, 500, 1000, 2000, 4000, 8000.

Page 83:

Performance of discrete event network (#5) under varied number of channels. Top left to bottom right: K = 3 to K = 14.

Page 84:

Number of non-zero coefficients under varied number of channels.