Desire: Getting Started



David Scuse September, 2011

    DESIRE NEURAL NETWORK SYSTEM

    Introduction:

The Desire system is a neural network modelling tool that allows the user to build models of neural networks. Using the system, almost any neural network model can be developed, including those that require differential equations. This is in contrast to conventional neural network tools that contain a variety of pre-defined models; the user can modify a model's parameters but cannot define new models (i.e., the system is a black box). With the Desire system, the neural network is defined using statements in the Desire programming language, so the user can see how each portion of the neural network performs its processing and, if necessary, can modify the processing. The Desire language is a high-level language, so the user does not get lost in the details (as happens with neural network toolkits written in C/C++).

    Running the DesireW (Windows) System

The Desire system (currently version 15.0) runs under Windows and Linux. (The following instructions refer to the Windows version.) To begin, double-click the command file DesireW.bat to launch the Desire system. This batch file opens a Desire editor window and the Desire command window.

    Figure 1: The Desire Editor Window and the Desire Command Window


Load a file into the Editor window either by dragging and dropping it onto the editor window or by using the editor's Open command. You may store your files in the same folder as the Desire system or in any other convenient folder. Do not use Windows to associate Desire source files (.src and .lst) with the Desire Editor; doing so causes the OK button in the Editor window to stop functioning correctly.

    Figure 2: Loading A Desire Source File


Transfer the file to Desire by clicking on the OK button in the editor window (or using the shortcut alt-T O). Then type the command erun (or zz) into the Desire command window. This command causes the program to be executed.

    Figure 3: Transferring A Desire Source File to the Desire Command Window


    Figure 5: Graph Window Menu Items

Similarly, the standard Windows facilities for copying and pasting the contents of the Desire command window are available by clicking the top-left icon of the window.

    Figure 6: Command Window Menu Items

By modifying the Properties of the command window, you can switch from the default white text on a black background to black text on a white background.

    Source Files

Normally, Desire source files have the extension .src. Files with the .src extension do not contain line numbers, while numbered source files have the extension .lst. If a source file already contains line numbers, the line numbers can be removed by using the command keep 'xxx' in the Desire command window to create an unnumbered version of the file on disk (the file will be named xxx.src). Internally, Desire still keeps the file numbered so that it can generate error messages and so that the programmer can refer to individual lines.

    In general, unnumbered source files are preferable to numbered source files in Desire.


To view the line numbers used in the Desire command window for the current source file, type the Desire command list+. This causes the file to be listed with line numbers in the command window. To list a specific line, type list+ 2200 (or, for a range of lines, type list+ 2200-2300) in the Desire command window. To save a copy of the current file with line numbers, type list+ 'filename.lst'. This causes a copy of the file to be saved in the Desire directory (which is not necessarily the same directory as the original source file). If you want to save the .lst file in a different directory, add the complete path to the file in the command; for example, list+ 'D:\MyFolder\filename.lst'.
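The effect of keep and list+ on a source file can be sketched in ordinary Python (this is an illustration only, not Desire code; the starting number and step used by Desire's numbering are an assumption here, and the helper names are invented for the sketch):

```python
# Hypothetical sketch (not part of Desire): what 'keep' and 'list+' do
# conceptually to a source file -- strip or add line numbers.

def strip_line_numbers(lst_text):
    """Like keep 'xxx': turn a numbered .lst listing into an unnumbered .src one."""
    lines = []
    for line in lst_text.splitlines():
        head, _, rest = line.partition(" ")
        # Drop the leading number if one is present; keep the line otherwise.
        lines.append(rest if head.isdigit() else line)
    return "\n".join(lines)

def add_line_numbers(src_text, start=100, step=100):
    """Like list+ 'xxx.lst': produce a numbered .lst-style listing.
    The start/step values are assumptions for illustration."""
    return "\n".join(f"{start + i * step} {line}"
                     for i, line in enumerate(src_text.splitlines()))

numbered = add_line_numbers("Npat=4 | Ninp=2 | Nout=1\ndrun")
print(numbered)                      # 100 Npat=4 | Ninp=2 | Nout=1
                                     # 200 drun
print(strip_line_numbers(numbered))  # back to the unnumbered source
```

Round-tripping through the two helpers returns the original unnumbered text, which is the same relationship Desire maintains between a .lst file and the .src file that keep produces.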

    A Quick Tour: Learning the OR Patterns

The following sections briefly describe a simple single-layer, feedforward Desire neural network. The SLFF1 program in Figure 7 defines a neural network that attempts to learn the logical OR patterns ({0,0} -> {0}, {1,1} -> {1}, etc.). The SLFF1 program defines a 2-1 network of threshold units and uses the delta training rule to learn the patterns.

The output of the SLFF1 program is shown in the Graph Window in Figure 8. Each of the generated symbols represents the value of the tss (the sum of squares of the error) at a particular point in time. The screen is organized in a graph format: the vertical axis represents the tss and ranges from -scale (represented by -) at the bottom to +scale (represented by +) at the top. The value of scale is defined in the program and is displayed at the bottom of the screen. In this example, the vertical axis ranges from -4 at the bottom to +4 at the top. The horizontal axis represents the time, t, and begins at 1 on the left and ranges to the total number of training iterations on the right. The number of iterations of the program is the number of training epochs times the number of patterns. A training epoch is the time period during which each pattern is presented once to the network. The number of training epochs is defined in the variable Nepochs. In the program below, the number of program iterations is 100 (25 * 4). The horizontal axis on the graph displays the values of time, t, beginning at 1 and continuing for 100 iterations.

--------------------------------------------------
-- Single-Layer Feedforward Network (SLFF1)
-- activation function: threshold
-- learning rule: delta learning
--------------------------------------------------
Npat=4 | Ninp=2 | Nout=1
ARRAY Layer1[Ninp],Layer2[Nout],Weights[Nout,Ninp]
ARRAY Target[Nout],Error[Nout]
ARRAY INPUT[Npat,Ninp],TARGET[Npat,Nout]
--------------------------------------------------
-- Define the OR training patterns
--------------------------------------------------
data 0,0;1,0;0,1;1,1 | read INPUT
data 0;1;1;1 | read TARGET
--------------------------------------------------
-- Initialize Run-Time Variables
--------------------------------------------------
scale=4 | display R | display N-16 | display W 362,0
Lrate=0.2 | Nepochs=25
--------------------------------------------------
-- Learn the weights
--------------------------------------------------
tss=0
t=1 | NN=Nepochs*Npat | TMAX=NN-1
drun
STOP
--------------------------------------------------
DYNAMIC


--------------------------------------------------
-- The following statements learn the weights
-- The statements are executed once for each training pattern
-- for a total of Nepochs * Npat learning repetitions
--------------------------------------------------
iRow=t
VECTOR Layer1=INPUT#
VECTOR Target=TARGET#
VECTOR Layer2=swtch(Weights*Layer1)
VECTOR Error=Target-Layer2
DELTA Weights=Lrate*Error*Layer1
DOT SSQ=Error*Error
tss=tss+SSQ
dispt tss
--------------------------------------------------
-- The following statements are executed once per training epoch
-- (instead of once per training pattern)
--------------------------------------------------
SAMPLE Npat
tss=0 | -- reset tss to zero for next epoch

    Figure 7: SLFF1 OR Patterns Program

    Figure 8: SLFF1 OR Patterns Program Output
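To see what the SLFF1 statements actually compute, the same training loop can be re-created in plain Python. This is an illustrative sketch, not Desire code; it assumes that swtch() behaves as a unit step returning 0 at and below zero, and that the Weights array starts at zero:

```python
# A minimal Python re-creation of the SLFF1 training loop (illustrative
# sketch, not Desire code). Assumptions: swtch() is a unit step that is
# 0 for inputs <= 0, and weights are initialized to zero.

Npat, Ninp, Nout = 4, 2, 1
INPUT  = [[0, 0], [1, 0], [0, 1], [1, 1]]   # OR training inputs
TARGET = [[0], [1], [1], [1]]               # OR targets
Lrate, Nepochs = 0.2, 25

W = [[0.0] * Ninp for _ in range((Nout))]   # Weights[Nout, Ninp]

def swtch(x):                               # threshold activation (assumed form)
    return 1.0 if x > 0 else 0.0

tss_per_epoch = []
for epoch in range(Nepochs):
    tss = 0.0
    for p in range(Npat):                   # one pass over all patterns = one epoch
        layer1 = INPUT[p]                   # VECTOR Layer1=INPUT#
        layer2 = [swtch(sum(W[j][i] * layer1[i] for i in range(Ninp)))
                  for j in range(Nout)]     # VECTOR Layer2=swtch(Weights*Layer1)
        error  = [TARGET[p][j] - layer2[j] for j in range(Nout)]
        for j in range(Nout):               # DELTA Weights=Lrate*Error*Layer1
            for i in range(Ninp):
                W[j][i] += Lrate * error[j] * layer1[i]
        tss += sum(e * e for e in error)    # DOT SSQ=Error*Error; tss=tss+SSQ
    tss_per_epoch.append(tss)               # SAMPLE Npat boundary: record, reset

print(tss_per_epoch[0], tss_per_epoch[-1])  # 2.0 0.0
```

Under these assumptions the error falls to zero after the first couple of epochs, which matches the behaviour described for the Graph Window: the total of Nepochs * Npat = 100 pattern presentations is far more than this simple problem needs.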


In Figure 8, the value of the tss increases as each pattern is processed during an epoch, and it is difficult to determine the actual system error by observing the graph. To simplify the graph, the tss is normally stored in a different variable (such as TSS) which is not modified until the end of each epoch.

--------------------------------------------------
-- Single-Layer Feedforward Network (SLFF1b)
-- activation function: threshold
-- learning rule: delta learning
--------------------------------------------------
Npat=4 | Ninp=2 | Nout=1
ARRAY Layer1[Ninp],Layer2[Nout],Weights[Nout,Ninp]
ARRAY Target[Nout],Error[Nout]
ARRAY INPUT[Npat,Ninp],TARGET[Npat,Nout]
--------------------------------------------------
-- Define the OR training patterns
--------------------------------------------------
data 0,0;1,0;0,1;1,1 | read INPUT
data 0;1;1;1 | read TARGET
--------------------------------------------------
-- Initialize Run-Time Variables
--------------------------------------------------
scale=4 | display R | display N-16 | display W 362,0
Lrate=0.2 | Nepochs=25
--------------------------------------------------
-- Learn the weights
--------------------------------------------------
tss=0 | TSS=scale
t=1 | NN=Nepochs*Npat | TMAX=NN-1
drun
write TSS
STOP
--------------------------------------------------
DYNAMIC
--------------------------------------------------
-- The following statements learn the weights and bias terms
-- The statements are executed once for each training pattern
-- for a total of Nepochs * Npat learning repetitions
--------------------------------------------------
iRow=t
VECTOR Layer1=INPUT#
VECTOR Target=TARGET#
VECTOR Layer2=swtch(Weights*Layer1)
VECTOR Error=Target-Layer2
DELTA Weights=Lrate*Error*Layer1
DOT SSQ=Error*Error
tss=tss+SSQ
dispt TSS
--------------------------------------------------
-- The following statements are executed once per training epoch
-- (instead of once per training pattern)
--------------------------------------------------
SAMPLE Npat
TSS=tss
tss=0 | -- reset tss to zero for next epoch

Figure 9: SLFF1b OR Patterns Program
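The difference between plotting tss (SLFF1) and TSS (SLFF1b) can be sketched in Python. This is illustrative only, not Desire code, and the per-pattern error values below are hypothetical numbers chosen just to show the two curve shapes:

```python
# Sketch of the SLFF1 vs SLFF1b display difference (illustrative Python,
# not Desire): the running tss climbs within each epoch, while the TSS
# copy changes only at epoch boundaries. The SSQ values are hypothetical.

Npat = 4
per_pattern_ssq = [0, 1, 1, 0,   # epoch 1 (hypothetical squared errors)
                   0, 0, 1, 0,   # epoch 2
                   0, 0, 0, 0]   # epoch 3

tss, TSS = 0.0, 4.0              # TSS starts at scale, as in SLFF1b
tss_curve, TSS_curve = [], []
for t, ssq in enumerate(per_pattern_ssq, start=1):
    tss += ssq                   # tss=tss+SSQ
    tss_curve.append(tss)        # SLFF1 plots this (dispt tss)
    TSS_curve.append(TSS)        # SLFF1b plots this (dispt TSS)
    if t % Npat == 0:            # SAMPLE Npat: once per training epoch
        TSS = tss                # TSS=tss
        tss = 0.0                # reset tss to zero for next epoch

print(tss_curve)   # sawtooth: climbs and resets every Npat steps
print(TSS_curve)   # step function: one level per epoch, easy to read
```

The second curve is the flat, once-per-epoch trace that makes the actual system error visible in the Graph Window, which is exactly why SLFF1b displays TSS instead of tss.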