
Neural network models for intelligent networks: deriving the location from signal patterns

Roberto Battiti, Alessandro Villani, and Thang Le Nhat

Università di Trento, Dipartimento di Informatica e Telecomunicazioni, via Sommarive 14, I-38050 Povo (TN), Italy

battiti|avillani|[email protected]

Abstract. The knowledge of the location of a mobile terminal can be used to reduce the cognitive burden on the users in context-aware systems, and it is crucial information for routing in ad-hoc networks and for optimizing the topology in order to minimize the energy consumption of the nodes. The strengths of the RF signals arriving from more access points are related to the position (location fingerprinting), but the complexity of the inverse problem of deriving the position from the signals, and the lack of complete information, motivate the use of flexible models based on a network of functions (neural networks), developed through a supervised learning strategy.
The advantage of the method is that it does not require ad-hoc infrastructure in addition to the wireless LAN, while the flexible modelling and learning capabilities of neural networks achieve small errors in determining the position, are amenable to incremental improvements, and do not require detailed knowledge of the access point locations and of the building characteristics. The system does not participate in an active manner in determining the position, therefore guaranteeing complete privacy.
Experimental results and comparisons with alternative techniques are presented and discussed.

Keywords: location- and context-aware computing, wireless LAN, IEEE 802.11b, neural networks, machine learning.

1 Introduction

The research in this paper proposes a new method to determine the location of a mobile terminal. Knowledge of the location, together with suitable models, is important in order to reduce the cognitive burden on the users in context- and location-aware systems [7,1,14]. Location awareness is considered, for example, in the infostation-based hoarding work of [12], and in the websign system of [15]. In addition, the location of mobile terminals is required by techniques for routing in ad-hoc networks and for optimizing the topology in order to minimize the energy consumption of the nodes, see for example [13].

Therefore, many different systems and technologies to determine the location of users for mobile computing applications have been proposed. Besides the Global Positioning System (GPS), used very successfully in open areas but ineffective indoors, there are several indoor location techniques, such as Active Badge [19], using infrared signals; Active Bat [20,9] and Cricket [16], using a combination of RF and ultrasound to estimate the distance; and PinPoint 3D-iD [21], RADAR [2], Nibble [6] and SpotON [10], using RF-based location determination.

The current paper considers a high-speed wireless LAN environment using the IEEE 802.11b standard. The method is based on neural network models and automated learning techniques. As is the case for the RADAR system, no special-purpose equipment is needed in addition to the wireless LAN, while the flexible modelling and learning capabilities of neural networks achieve lower errors in determining the position, are amenable to incremental improvements, and do not require detailed knowledge of the access point locations and of the building characteristics beyond a map of the working space. Preliminary results of this work are presented in [5].

The following part of this paper is organized as follows. Section 2 describes the methodology for modelling the input-output relationship through multi-layer perceptron neural networks, Section 3 describes the system and the collection of data points for the experiments, Section 4 analyzes the distance error results obtained from the neural network, and Section 5 discusses the results obtained by using the k-nearest-neighbors method. The effects of the signal variability on the position error are assessed in Section 6.

2 Methodology: Models based on neural networks

In our system the signal strengths received at a mobile terminal from different access points (at least three) are used to determine the position of the terminal inside a working area. The starting point of the method is the relationship between distance and signal strength from a given access point. Detailed radio propagation models for indoor environments are considered, for example, in [2]. If one knows the distances $d_i$ from the mobile terminal to at least three different APs, one can calculate the position of the mobile terminal in the system.

In a variegated and heterogeneous environment, e.g., inside a building or in a complex urban geometry, the received power is a very complex function of the distance, the geometry of the walls, and the infrastructure contained in the building. Even if a detailed model of the building is available, solving the direct problem of deriving the signal strength given the location requires a lengthy simulation. The inverse problem, of deriving the location from the signal strengths, is more complicated and very difficult to solve in realistic situations.

Neural network models and automated learning techniques are an effective solution to estimate the location and to reduce the distance error. The non-linear transformation of each unit and a sufficiently large number of free parameters guarantee that a neural network is capable of representing the relationship between inputs (signal strengths) and outputs (position). Let us note that the distance from the access points, and therefore detailed knowledge of their position, is not required by the system: a user may train and use the method without asking for this information.

The specification of the free parameters of the model (also called "weights" of the network) requires a learning strategy that starts from a set of labelled examples to construct a model that will then generalize in an appropriate manner when confronted with new data, not present in the training set.

Fig. 1. An example of a multi-layer perceptron configuration (input layer: AP1's, AP2's and AP3's signals; hidden layer; output layer: X and Y).

We consider the "standard" multi-layer perceptron (MLP) architecture, see for example Fig. 1, with weights connecting only nearby layers and the sum-of-squared-differences energy function defined as:

$$E(w) = \sum_p \left( t_p - o_p(w) \right)^2 \qquad (1)$$

where $t_p$ and $o_p$ are the target and the current output values for pattern $p$, respectively, as a function of the network weights $w$.

The architecture of the multi-layer perceptron is organized as follows: the signals flow sequentially through the different layers from the input to the output layer. For each layer, each unit ("neuron") first calculates a scalar product between a vector of weights and the vector given by the outputs of the previous layer. A transfer function is then applied to the result to produce the input for the next layer. The transfer function for the hidden layers is the sigmoidal function $f(x) = 1/(1+e^{-x})$, while for the output layer it is the identity function, so that the output signal is not bounded.
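To make the model concrete, the following minimal NumPy sketch implements the forward pass just described and the energy function of eq. (1). The array shapes, helper names and random initialization are our own illustration, not from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(signals, W_hid, b_hid, W_out, b_out):
    """signals: (n_patterns, 3) array of signal strengths -> (n_patterns, 2) positions."""
    hidden = sigmoid(signals @ W_hid + b_hid)   # sigmoidal hidden layer
    return hidden @ W_out + b_out               # identity output layer: unbounded (x, y)

def sum_of_squares_error(outputs, targets):
    """Energy function of eq. (1): sum of squared differences over all patterns."""
    return np.sum((targets - outputs) ** 2)

# Example with the 3-16-2 architecture selected in Sec. 4:
rng = np.random.default_rng(0)
W_hid, b_hid = 0.1 * rng.normal(size=(3, 16)), np.zeros(16)
W_out, b_out = 0.1 * rng.normal(size=(16, 2)), np.zeros(2)
pred = mlp_forward(rng.normal(size=(5, 3)), W_hid, b_hid, W_out, b_out)
```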

It has been demonstrated that a network with a single hidden layer is sufficient to approximate any continuous function to a desired accuracy, provided that the number of hidden neurons is sufficiently large [11]. In this work we consider a single-hidden-layer MLP and a training technique that uses second-derivatives information: the one-step-secant (OSS) method with fast line searches, introduced in [3,4]. The OSS method is a variation of what is called the one-step (memory-less) Broyden-Fletcher-Goldfarb-Shanno method, see [17]. The OSS method is described in detail and used for multi-layer perceptrons in [3] and [4].
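As a rough illustration of the idea behind OSS (not the authors' implementation, which includes the fast line searches and safeguards described in [3,4]), the following sketch computes the one-step memory-less BFGS search direction from the current gradient g, the previous step s, and the previous gradient difference y:

```python
import numpy as np

def one_step_secant_direction(g, s, y):
    """Memory-less BFGS search direction underlying OSS (sketch).

    g: current gradient; s: previous weight step (w_t - w_{t-1});
    y: previous gradient difference (g_t - g_{t-1}).
    """
    rho = s @ y
    if abs(rho) < 1e-12:            # degenerate curvature: fall back to steepest descent
        return -g
    B = (s @ g) / rho
    A = (y @ g) / rho - (1.0 + (y @ y) / rho) * B
    return -g + A * s + B * y       # d = -H g with the one-step BFGS approximation of H
```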

3 System and experimental setup

Our system consists of a wireless Local Area Network based on the IEEE 802.11b standard. It is located on the first floor of a 3-storeyed building. The floor has dimensions of 25.5 m × 24.5 m, for a total area of 624.75 m², and includes more than eleven rooms (offices and classrooms).

The three access points are AVAYA WP-II E models by Lucent Technologies, two of them with external antennas. The wireless stations are Pentium-based laptop computers running Linux (version 7.2). Each laptop is equipped with the ORiNOCO PC card, a wireless network interface card by Lucent Technologies.

The network operates in the 2.4 GHz license-free ISM band and supports data rates of 1, 2, 5.5, and 11 Mbps. The 2.4 GHz ISM band is divided into 13 channels (IEEE & ETSI Wireless LAN Standard) and only three channels are used: channels 1, 7, and 13 at 2412, 2442 and 2472 MHz, respectively, in order to minimize the interference.

Fig. 2. The floor layout of the experiment, with access point locations.

To facilitate the collection of labelled example patterns, the map of the area is stored on a laptop and a user interface has been designed based on a single click on the displayed map. The origin of the coordinate system (0,0) is placed at the bottom-left corner of the map. The $(x, y)$ coordinates of the three access points AP1, AP2 and AP3 are fixed with respect to this origin; their locations are shown in Fig. 2.


When the user is at an identifiable position in the experimental area (e.g., at the entrance of a room, close to a corner, close to a column, etc.), he clicks on the displayed map at the point corresponding to the current position. Immediately after the click, the three received radio signal strengths from the APs are automatically measured, and they are saved together with the point's coordinates in a file, to prepare the examples for training and testing the neural network.

A total of 194 measurement points are collected during different periods of the day.
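The paper does not specify the file layout; a plausible sketch, assuming one whitespace-separated record per click with the map coordinates followed by the three measured signal strengths, is:

```python
# Hypothetical layout of one collected example (coordinates in meters,
# signal strengths in dBm):
# x      y      SS1    SS2    SS3
# 12.50  7.25   -48    -63    -71
def parse_example(line):
    x, y, ss1, ss2, ss3 = map(float, line.split())
    return (ss1, ss2, ss3), (x, y)   # input triplet, target position
```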

4 Selection of the multi-layer perceptron architecture

A labelled training set (given by input signals and corresponding output locations) is used by the OSS learning algorithm to determine the free parameters of the flexible MLP architecture. The measure of the error on the training set given by eq. (1) is minimized by OSS during the learning procedure.

It is essential to note that the objective of the training algorithm is to build a model with good generalization capabilities when confronted with new input values, values not present in the training set. The generalization is related both to the number of parameters and to the length of the training phase. In general, an excessive number of free parameters and an excessively long training phase (over-training) reduce the training error of eq. (1) to small values but prejudice the generalization: the system memorizes the training patterns and does not extract the regularities in the task that make generalization possible.

The theoretical basis for appropriate generalization is described by the theory of the Vapnik-Chervonenkis (VC) dimension [18]. Unfortunately, the VC dimension is not easily calculated for a specific problem, and experimentation is often the only way to derive an appropriate architecture and length of the training phase for a given task.

The purpose of the experiments in this section is to determine the architecture, in our case the number of hidden units, and the length of the training phase leading to the best generalization results. Fig. 3 presents a significant summary of the results obtained in the experiments.

The three architectures considered are given by 4, 8, and 16 hidden units. A set of labelled examples (signal strengths and correct location) has been collected as described in Sec. 3. Among all examples, 140 are extracted randomly and used for the training phase; the remaining ones (54) are used to test the generalization at different steps during the training process. The plotted value is the average absolute distance error $D_e$ over all patterns:

$$D_e = \frac{1}{N} \sum_p \sqrt{ \left( t^x_p - o^x_p(w) \right)^2 + \left( t^y_p - o^y_p(w) \right)^2 } \qquad (2)$$

where $o^x_p(w)$ and $o^y_p(w)$ are the $x$ and $y$ coordinates obtained by the network, $t^x_p$ and $t^y_p$ are the correct "target" values, and $N$ is the number of test or training patterns, depending on the specific plot.
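Eq. (2) translates directly into code; a minimal sketch, assuming predictions and targets are stored as (N, 2) arrays of coordinates in meters:

```python
import numpy as np

def average_distance_error(pred_xy, target_xy):
    """Eq. (2): mean Euclidean distance between predicted and true positions."""
    diffs = target_xy - pred_xy                    # shape (N, 2)
    return np.mean(np.sqrt(np.sum(diffs ** 2, axis=1)))
```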

Fig. 3. Training and test error for architectures 3 × 4 × 2 (top), 3 × 8 × 2 (middle), and 3 × 16 × 2 (bottom). Each panel plots the average absolute error (meters) against the number of iterations (0 to 50,000), for both the test error and the training error.

Both the training error and the generalization error are shown in each panel of the figure. As expected, the training error decreases during training, while the generalization error first decreases, then reaches a minimum value and finally tends to increase (the over-training effect). The over-training effect is particularly strong for the architecture with 16 hidden units. The best generalization values (of about 1.80 meters) are reached after approximately 3,000 iterations for both the 8 and 16 hidden unit architectures.

The robustness of the MLP model for different architectures and for different lengths of the training phase is to be noted. When the architecture changes from 4 to 8 to 16 hidden units, the optimal generalization value changes by less than 14% (from 1.99 meters to about 1.75 meters). When the number of iterations increases from 2,000 to 50,000, the generalization error remains approximately stable for the more compact architecture (4 hidden units), and increases slowly for the architectures with more hidden units, as expected because of the larger flexibility of the architecture.

After this series of tests, the architecture 3 × 16 × 2 achieves the best result (in the examples of Fig. 3 we obtain a result on the test set of 1.75 meters). The structure of the neural network used in the subsequent tests therefore consists of three layers: 3 input units, 16 hidden units and 2 outputs. The CPU time for a single training session (3,000 iterations with 194 examples) on the architecture 3 × 16 × 2 is 21 seconds on a 1400 MHz Pentium IV.

To study the effect of the number of training examples on the generalization error, a series of tests has been executed. For each test, a specified number of examples is extracted randomly from the entire set and used for training, while the remaining ones are used to test generalization after training is completed. For a given number of examples, 100 tests have been executed.

Fig. 4. MLP: reduction of the average distance error (test) as a function of the number of examples in the training set (with 16 neurons in the hidden layer): average and individual results. The average test error (meters) is plotted against the number of examples (0 to 200).

The reduction of the average distance error (test) as a function of the number of examples in the training set (with 16 neurons in the hidden layer) is illustrated in Fig. 4. The average error decreases rapidly as a function of the number of examples, to reach a final average value of 1.91 meters. The standard deviation of the individual results is larger when either the number of training examples or the number of test examples is very small.

In order to use the largest number of examples for training, the leave-one-out technique has been adopted for a final test. In this case, all examples apart from one are used for training, while the generalization is tested on the single left-out example. The experiment is repeated by considering each pattern in turn as the left-out example. When using an architecture with 16 neurons in the hidden layer, the average result of the leave-one-out test is 1.82 meters.
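In outline, the leave-one-out protocol can be sketched as follows; train_fn and predict_fn stand in for the OSS-trained 3 × 16 × 2 network and are placeholders, not functions from the paper:

```python
import numpy as np

def leave_one_out_error(inputs, targets, train_fn, predict_fn):
    """Average distance error with each pattern held out once (sketch)."""
    errors = []
    for i in range(len(inputs)):
        mask = np.arange(len(inputs)) != i          # train on all examples except i
        model = train_fn(inputs[mask], targets[mask])
        pred = predict_fn(model, inputs[i:i + 1])[0]
        errors.append(np.linalg.norm(pred - targets[i]))
    return np.mean(errors)
```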

5 Comparison with K-Nearest-Neighbors

The k-nearest-neighbors rule is a well-known non-parametric technique. For the case of classification, the nearest-neighbor rule classifies a pattern with the same label as that of the closest labelled sample: the procedure is suboptimal but, with an unlimited number of samples, the error rate is never worse than twice the minimum (Bayes) error rate [8]. In the context of regression to obtain the location, the k-nearest-neighbors rule (a generalization that considers k neighbors instead of the single nearest neighbor) has been used in [2].

Because it is difficult to compare the results obtained in different experimental environments, the k-nearest-neighbors rule in the signal space has been used to classify the same examples used for the previous tests. This method computes the distances between the vector of the received signals at a certain point and the vectors of signals of a set of points with known locations. The vectors having the k smallest distances are considered, and the estimated position is computed as an average of the positions associated with these smallest-distance vectors.

The same data set used for MLP is divided into two disjoint sets of examples: one for training and one for testing.

In the training set we consider both the signal strengths and the position:

$$(SS_1^i, SS_2^i, SS_3^i, x_i, y_i)$$

If the training set consists of $N$ examples, in order to compute the position corresponding to a triplet of signal strengths $(SS_1, SS_2, SS_3)$ taken from the test set, one proceeds as follows:

– For each $i = 1, \dots, N$, compute the distance $d_i$ between $(SS_1^i, SS_2^i, SS_3^i)$ and $(SS_1, SS_2, SS_3)$.
– Find the $k$ smallest distances (with $k$ fixed): $d_{i_1}, \dots, d_{i_k}$.
– Compute the estimated position as the average of the positions recorded for the $k$ examples selected at the previous step:

$$\hat{x} = \frac{1}{k} \sum_{j=1}^{k} x_{i_j}, \qquad \hat{y} = \frac{1}{k} \sum_{j=1}^{k} y_{i_j}$$

The standard Euclidean distance $d_i$ is used:

$$d_i = \sqrt{ (SS_1^i - SS_1)^2 + (SS_2^i - SS_2)^2 + (SS_3^i - SS_3)^2 }$$

Instead of the simple average, a weighted average can be adopted. In this case the positions are weighted according to the inverse of the distance (unless the distance is zero, in which case the weight is 1,000). The estimated position is now computed as:

$$\hat{x} = \frac{\sum_{j=1}^{k} x_{i_j}/d_{i_j}}{\sum_{j=1}^{k} 1/d_{i_j}}, \qquad \hat{y} = \frac{\sum_{j=1}^{k} y_{i_j}/d_{i_j}}{\sum_{j=1}^{k} 1/d_{i_j}}$$
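Both estimators can be summarized in a short sketch; the function name and array layout are ours, and k defaults to the value found best in the experiments reported next:

```python
import numpy as np

def knn_position(query_ss, train_ss, train_xy, k=6, weighted=False):
    """Estimate (x, y) from a signal-strength triplet (sketch of Sec. 5).

    query_ss: (3,) signal strengths; train_ss: (N, 3); train_xy: (N, 2).
    """
    dists = np.sqrt(np.sum((train_ss - query_ss) ** 2, axis=1))  # Euclidean, signal space
    idx = np.argsort(dists)[:k]                                  # k smallest distances
    if not weighted:
        return train_xy[idx].mean(axis=0)                        # simple average
    # Inverse-distance weights; weight 1,000 for a zero distance, as in the text.
    d = dists[idx]
    w = np.where(d == 0, 1000.0, 1.0 / np.where(d == 0, 1.0, d))
    return (train_xy[idx] * w[:, None]).sum(axis=0) / w.sum()
```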

To analyze the effect of the number of neighbors k on the error, the leave-one-out technique is used on the whole data set. Fig. 5 shows the average error results for the weighted and standard methods. In both cases, the best result is obtained for k = 6. The average distance error is 1.81 meters when using the normal average and 1.78 meters for the weighted average case. The weighted method achieves better results and is more stable when the parameter k increases: the effect of far-away neighbors on the average becomes less harmful, as expected.

Fig. 5. K-NN: reduction of the average distance error (test) as a function of the number of neighbors: standard and weighted average. The average test error (meters) is plotted against the number of neighbors (0 to 16).

A series of tests has been executed to study the average distance error when the number of examples used for the training set is changed, in the same manner as in Sec. 4. As shown in Fig. 6, the error decreases from 4.5 meters when using 10 training examples to less than 1.90 meters when using more than 170 examples. With only 50 examples the average distance error is already down to 2.38 meters.

6 Effect of Signal Variability on Location Errors

The signal strength variability at a fixed position causes a location error that cannot be eliminated if a single measurement is executed. This error can be considered as a lower bound, unless more than one signal measurement is executed (for example, by averaging or by considering a moving average).

Fig. 6. K-NN: reduction of the average distance error (test) as a function of the number of examples in the training set (using 6 neighbors). The average test error (meters) is plotted against the number of examples (0 to 200).


In order to verify the variability of the signal strength during the day and the corresponding variability of the position estimation, a mobile terminal was left at a fixed position to acquire the signals received from the access points during the day. In the end, a total of 11,625 values of the signals for the same position were collected, although, because of quantization, only 465 different triplets of signals exist in the data set. The values were acquired during a working day when many people moved in the test-bed area.

After using the same training set as in Sec. 4 and Sec. 5, the position of the mobile terminal is evaluated through both MLP and k-nearest-neighbors. Figs. 7–8 show the obtained positions and their distribution, obtained by counting how many positions fall in each bin. Fig. 7 shows the result obtained with MLP. The error in the position estimation caused by signal fluctuation is rather limited, with a standard deviation of 0.359 meters. Fig. 8 describes the result for the k-nearest-neighbors case. The standard deviation of the distance is now 0.392 meters, a little higher than the standard deviation for MLP.
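One plausible reading of these standard deviations is the scatter of the estimated positions around their mean; a minimal sketch, assuming positions is an (M, 2) array of repeated estimates for the fixed terminal:

```python
import numpy as np

def position_scatter(positions):
    """Standard deviation of the distances from the mean estimated position."""
    center = positions.mean(axis=0)
    return np.std(np.linalg.norm(positions - center, axis=1))
```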

Both results indicate a significant gap between the error obtained with the presented techniques and the estimate of the location variability caused only by signal fluctuations. Additional work should therefore consider different and more powerful modelling techniques, aiming at a possible further reduction of the distance error.

7 Conclusion

A new model based on MLP neural networks to determine the position of a mobile terminal in a wireless LAN context has been presented. The technique has been evaluated in a real-world test-bed consisting of access points and mobile terminals using the IEEE 802.11b standard.

Fig. 7. Scattering of the positioning using MLP with 16 neurons in the hidden layer (distribution of the estimated positions: counts per bin over the x-y plane, axes in meters).

Fig. 8. Scattering of the positioning using 6 nearest neighbors (distribution of the estimated positions: counts per bin over the x-y plane, axes in meters).

In addition, a comparison with an alternative method based on k-nearest-neighbors has been executed in the same experimental context. The results obtained with the MLP neural network (1.82 meters) are comparable to the best results obtained with k-nearest-neighbors when the parameter k is optimized for the current context (1.81 meters for the standard average, 1.78 meters for the weighted average).

The fact that a basic method like k-nearest-neighbors achieves similar results is an indication that the complex dependencies between signals and positions are not easily modelled by an MLP neural network.

The presented methods are not tied to the specific technology considered here. We plan to continue this work by considering different technologies and contexts (e.g., the Bluetooth protocol) and different non-parametric modelling techniques that may lower the position errors to values closer to the errors caused by the signal fluctuations.

Acknowledgments

We acknowledge the financial support of the Wireless Internet and Location Management Architecture (WILMA) project, University of Trento and ITC-irst, and the help of Mauro Brunato.

References

1. J. Anhalt, A. Smailagic, D. P. Siewiorek, F. Gemperle, D. Salber, S. Weber, J. Beck, and J. Jennings. Toward context-aware computing: Experiences and lessons. IEEE Intelligent Systems, 16(3):38–46, May-June 2001.

2. Paramvir Bahl and Venkata N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. In IEEE INFOCOM 2000, pages 775–784, March 2000.

3. R. Battiti. Accelerated back-propagation learning: Two optimization methods. Complex Systems, 3(4):331–342, 1989.

4. R. Battiti. First- and second-order methods for learning: Between steepest descent and Newton's method. Neural Computation, 4:141–166, 1992.

5. R. Battiti, T. Le Nhat, and A. Villani. Location-aware computing: a neural network model for determining location in wireless LANs. Technical Report 5, Dipartimento di Informatica e Telecomunicazioni, Università di Trento, Feb 2002.

6. P. Castro, P. Chiu, T. Kremenek, and R. Muntz. A probabilistic room location service for wireless networked environments. In Ubicomp 2001: Ubiquitous Computing, pages 18–34. Springer, 2001.

7. G. M. Djuknic and R. E. Richton. Geolocation and assisted GPS. IEEE Computer, pages 123–125, February 2001.

8. R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley and Sons, 1973.

9. Andy Harter, Andy Hopper, Pete Steggles, Andy Ward, and Paul Webster. The anatomy of a context-aware application. In MOBICOM 1999, pages 59–68, August 1999.

10. Jeffrey Hightower, Gaetano Borriello, and Roy Want. SpotON: An indoor 3D location sensing technology based on RF signal strength. Technical Report UW-CSE 2000-02-02, University of Washington, February 2000.

11. Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4:251–257, 1991.

12. U. Kubach and K. Rothermel. Exploiting location information for infostation-based hoarding. In Seventh Annual International Conference on Mobile Computing and Networking, Rome, pages 15–27, 2001.

13. M. A. Marsan, C.-F. Chiasserini, A. Nucci, G. Carello, and L. D. Giovanni. Optimizing the topology of Bluetooth wireless personal area networks. In Proceedings of IEEE INFOCOM 2002, in press, 2002.

14. A. Misra, S. Das, A. McAuley, and S. K. Das. Autoconfiguration, registration and mobility management for pervasive computing. IEEE Personal Communications (Special Issue on Pervasive Computing), 8(4):24–31, Aug 2001.

15. S. Pradhan, C. Brignone, J.-H. Cui, A. McReynolds, and M. T. Smith. Websigns: hyperlinking physical locations to the web. IEEE Computer, 34(8):42–48, August 2001.

16. Nissanka B. Priyantha, Anit Chakraborty, and Hari Balakrishnan. The Cricket location-support system. In MOBICOM 2000, pages 32–43, August 2000.

17. D. F. Shanno. Conjugate gradient methods with inexact searches. Mathematics of Operations Research, 3(3):244–256, 1978.

18. V. N. Vapnik and A. Ja. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl., 16:264–280, 1971.

19. Roy Want, Andy Hopper, Veronica Falcao, and Jonathan Gibbons. The active badge location system. ACM Transactions on Information Systems, 10(1):91–102, January 1992.

20. Andy Ward, Alan Jones, and Andy Hopper. A new location technique for the active office. IEEE Personal Communications, 4(5):42–47, October 1997.

21. Jay Werb and Colin Lanzl. Designing a positioning system for finding things and people indoors. IEEE Spectrum, 35(9):71–78, September 1998.

