-Artificial Neural Network- Chapter 4: Adaline & Madaline
朝陽科技大學 (Chaoyang University of Technology), Department of Information Management, Prof. 李麗華



Outline
- ADALINE
- MADALINE
- Least-Square Learning Rule
- The proof of the Least-Square Learning Rule

ADALINE (1/3)
ADALINE (Adaptive Linear Neuron) was proposed in 1959 by Bernard Widrow. It consists of a single processing element (PE).

ADALINE (2/3)
Method: the output value of each unit must be +1 or -1 (the perceptron uses 1 and 0).
  net = Σi Wi Xi = W0 X0 + W1 X1 + W2 X2 + ... + Wn Xn
  Y = +1 if net ≥ 0
  Y = -1 if net < 0
This transfer function is different from the perceptron's.

ADALINE (3/3)
The weights are adjusted by
  ΔWi = η (T - Y) Xi, where T is the expected output and η the learning rate
  Wi ← Wi + ΔWi
Limitation: ADALINE can solve only linearly separable (linear) problems.

MADALINE
MADALINE (Multilayer Adaline) is composed of many ADALINEs. The inputs x1, ..., xn feed each ADALINE through weights Wij to produce netj; there are no weights (no Wij) after the first layer. After the second layer, a majority vote is used: if more than half of the netj ≥ 0, the output Y is 1; otherwise the output is -1.

Least-Square Learning Rule (1/6)
Let Xj = (x0, x1, ..., xn)^t denote the j-th input pattern, j = 1, ..., L, where L is the number of input patterns, and let W = (w0, w1, ..., wn)^t be the weight vector (t denotes transpose). Then
  Netj = W^t Xj = Σi=0..n wi xi = w0 x0 + w1 x1 + ... + wn xn

Least-Square Learning Rule (2/6)
By applying the least-square learning rule, the weights are W* = R^(-1) P, where
  R'j = Xj Xj^t
  R = (R'1 + R'2 + ... + R'L) / L   (R is the correlation matrix)
  P = (1/L) Σj=1..L Tj Xj

Least-Square Learning Rule (3/6)
Example:
  X1 = (1, 1, 0)^t, T1 = 1
  X2 = (1, 0, 1)^t, T2 = 1
  X3 = (1, 1, 1)^t, T3 = -1
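The ADALINE transfer function and the weight-update rule above can be sketched in code. This is a minimal illustration, not from the slides: the function names, the learning rate value, and the epoch count are my own choices; the update ΔWi = η(T - Y)Xi and the bipolar threshold follow the slides' definitions.

```python
# Minimal sketch of a single ADALINE unit with the update rule from the slides:
# net = sum_i w_i * x_i, Y = +1 if net >= 0 else -1, and delta_w_i = eta*(T - Y)*x_i.
# Names (adaline_output, adaline_update) and eta=0.1 are illustrative assumptions.

def adaline_output(weights, x):
    """Bipolar transfer: +1 if net >= 0, else -1."""
    net = sum(w * xi for w, xi in zip(weights, x))
    return 1 if net >= 0 else -1

def adaline_update(weights, x, target, eta=0.1):
    """One update step: w_i <- w_i + eta * (T - Y) * x_i."""
    y = adaline_output(weights, x)
    return [w + eta * (target - y) * xi for w, xi in zip(weights, x)]

# Training data from the slides' example: X1=(1,1,0), X2=(1,0,1), X3=(1,1,1).
patterns = [([1, 1, 0], 1), ([1, 0, 1], 1), ([1, 1, 1], -1)]

w = [0.0, 0.0, 0.0]
for _ in range(50):              # this linearly separable problem converges quickly
    for x, t in patterns:
        w = adaline_update(w, x, t)

print([adaline_output(w, x) for x, _ in patterns])  # matches the targets [1, 1, -1]
```

Because the problem is linearly separable, the update stops changing the weights once every pattern is classified correctly.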

Least-Square Learning Rule (4/6)
Sol. First compute the correlation matrix R:

  R'1 = X1 X1^t = | 1 1 0 |
                  | 1 1 0 |
                  | 0 0 0 |

  R'2 = X2 X2^t = | 1 0 1 |
                  | 0 0 0 |
                  | 1 0 1 |

  R'3 = X3 X3^t = | 1 1 1 |
                  | 1 1 1 |
                  | 1 1 1 |

  R = (1/3)(R'1 + R'2 + R'3) = (1/3) | 3 2 2 |   |  1  2/3 2/3 |
                                     | 2 2 1 | = | 2/3 2/3 1/3 |
                                     | 2 1 2 |   | 2/3 1/3 2/3 |

Least-Square Learning Rule (5/6)
  P = (1/3)[ 1·(1,1,0)^t + 1·(1,0,1)^t + (-1)·(1,1,1)^t ] = (1/3)(1, 0, 0)^t = (1/3, 0, 0)^t

Solve R W* = P for W* = (W0, W1, W2)^t:
  W0 + (2/3)W1 + (2/3)W2 = 1/3   →  3W0 + 2W1 + 2W2 = 1
  (2/3)W0 + (2/3)W1 + (1/3)W2 = 0  →  2W0 + 2W1 + W2 = 0
  (2/3)W0 + (1/3)W1 + (2/3)W2 = 0  →  2W0 + W1 + 2W2 = 0
  ⇒ W0 = 3, W1 = -2, W2 = -2, i.e. W* = (3, -2, -2)^t
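The worked example can be checked numerically. A small sketch, assuming NumPy is available: build R = (1/L) Σj Xj Xjᵗ and P = (1/L) Σj Tj Xj, then solve R W* = P.

```python
# Numerical check of the slides' example: correlation matrix R, vector P,
# and the least-square weights W* = R^{-1} P. Assumes NumPy.

import numpy as np

X = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)   # rows are X1, X2, X3
T = np.array([1, 1, -1], dtype=float)    # targets T1, T2, T3

L = len(X)
R = sum(np.outer(x, x) for x in X) / L    # R = (1/L) * sum_j Xj Xj^t
P = sum(t * x for t, x in zip(T, X)) / L  # P = (1/L) * sum_j Tj Xj = (1/3, 0, 0)

W = np.linalg.solve(R, P)                 # W* = R^{-1} P
print(W)                                  # approximately [ 3., -2., -2.]
```

Solving the linear system rather than inverting R explicitly is the standard numerically stable choice; the result matches the hand-derived W* = (3, -2, -2)ᵗ.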

Least-Square Learning Rule (6/6)
Verify the net (a single ADALINE with inputs X1, X2, X3 and weights 3, -2, -2):
  (X1, X2, X3) = (1, 1, 0): net = 3X1 - 2X2 - 2X3 = 1,  Y = 1  ok
  (X1, X2, X3) = (1, 0, 1): net = 3X1 - 2X2 - 2X3 = 1,  Y = 1  ok
  (X1, X2, X3) = (1, 1, 1): net = 3X1 - 2X2 - 2X3 = -1,  Y = -1  ok

Proof of Least-Square Learning Rule (1/3)
We use the least mean square error to ensure the minimum total error; as the total error approaches zero, the best solution is found. Therefore, we look for the minimum of ⟨εk²⟩.
Proof:
  εk = Tk - Yk = Tk - W^t Xk
  ⟨εk²⟩ = (1/L) Σk=1..L εk² = (1/L) Σk=1..L (Tk - W^t Xk)²
        = ⟨Tk²⟩ - 2 ⟨Tk Xk^t⟩ W + W^t ⟨Xk Xk^t⟩ W

Proof of Least-Square Learning Rule (2/3)
Let R'k = Xk Xk^t, R' = R'1 + R'2 + ... + R'L, and R = R'/L (i.e. the mean of the R'k). R is an n×n matrix, also called the correlation matrix. Let P = ⟨Tk Xk⟩. Then
  ⟨εk²⟩ = ⟨Tk²⟩ - 2 P^t W + W^t R W
Find the W* such that ⟨εk²⟩ is minimal: set the gradient to zero,
  ∂⟨εk²⟩/∂W = -2P + 2RW = 0  ⇒  R W* = P  ⇒  W* = R^(-1) P

Proof of Least-Square Learning Rule (3/3)
Since ∂²⟨εk²⟩/∂W² = 2R > 0, this W* is the best (minimum) solution.
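The proof's claim can also be checked numerically on the example. A sketch, assuming NumPy: the gradient 2RW - 2P vanishes at W* = R⁻¹P, and perturbing W* can only increase the mean squared error.

```python
# Numerical check of the proof: the mean squared error e(W) = <(Tk - W^t Xk)^2>
# has gradient 2RW - 2P, which is zero at W* = R^{-1} P, a minimum. Assumes NumPy.

import numpy as np

X = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([1, 1, -1], dtype=float)
L = len(X)

R = sum(np.outer(x, x) for x in X) / L    # correlation matrix
P = sum(t * x for t, x in zip(T, X)) / L
W_star = np.linalg.solve(R, P)            # W* = R^{-1} P

def mse(W):
    """Mean squared error <eps_k^2> over the L patterns."""
    return np.mean((T - X @ W) ** 2)

grad = 2 * R @ W_star - 2 * P             # gradient at W*: numerically zero
print(grad)
print(mse(W_star))                        # 0 here, since W* fits all three patterns exactly
print(mse(W_star + np.array([0.1, 0, 0])) > mse(W_star))  # any perturbation raises the error
```

In this example the minimum error is exactly zero because the three patterns are fit perfectly; in general W* minimizes, but need not zero out, the mean squared error.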