Research Article: Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures
Niels H. Christiansen,1,2 Per Erlend Torbergsen Voie,3 Ole Winther,4 and Jan Høgsberg2

1 DNV Denmark A/S, Tuborg Parkvej 8, 2900 Hellerup, Denmark
2 Department of Mechanical Engineering, Technical University of Denmark, 2800 Kongens Lyngby, Denmark
3 Det Norske Veritas, 7496 Trondheim, Norway
4 Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kongens Lyngby, Denmark
Correspondence should be addressed to Niels H. Christiansen; nielschristiansen@dnv.com
Received 19 November 2013; Accepted 21 January 2014; Published 2 March 2014
Academic Editor: Ricardo Perera
Copyright © 2014 Niels H. Christiansen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature; however, by far the most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain flow counts of stress cycles over a number of time series simulations. It is shown that it is possible to adjust the error function so that the network performs significantly better on a specific problem. On the other hand, it is also shown that weighted error functions can actually impair the performance of an ANN.
1. Introduction
Over the years, oil and gas exploration has moved towards more and more harsh environments. In deep and ultra-deep water installations the reliability of flexible risers and anchor lines is of paramount importance. Therefore, the design and analysis of these structures draw an increasing amount of attention.
Flexible risers and mooring line systems exhibit large deflections in service. Analysis of this behavior requires large nonlinear numerical models and long time-domain simulations [1, 2]. Thus, reliable analysis of these types of structures is computationally expensive. Over the last decades an extensive variety of techniques and methods to reduce this computational cost have been suggested. A review of the most common concepts of analysis is given in [3]. One method that has shown promising preliminary results is a hybrid method which combines the finite element method (FEM) and artificial neural networks (ANN) [4]. The ANN is a pattern recognition tool that, based on sufficient training, can perform nonlinear mapping between a given input and a corresponding output. The reader may consult, for example, Warner and Misra [5] for a fast, thorough introduction to neural networks and their features. The idea with the hybrid method is to first perform a response analysis for a structure using a FEM model and subsequently to use these results to train an ANN to recognize and predict the response for future loads. As demonstrated by Ordaz-Hernandez et al. [6], an ANN can be trained to predict the deformation of a nonlinear cantilevered beam. A similar approach was used by Hosseini and Abbas [7] when they predicted the deflection of clamped beams struck by a mass. In connection with analysis of marine structures, Guarize et al. [8] have shown that a well-trained ANN with very high accuracy can reduce dynamic simulation time for analysis of flexible risers by a factor of about 20. However, a problem with this method is that an ANN can only make accurate predictions
Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2014, Article ID 759834, 11 pages, http://dx.doi.org/10.1155/2014/759834
based on sufficiently known input patterns. This means that a network trained on one type of load pattern will have difficulties in predicting the response when the structure is exposed to different types of loading conditions. Recently, a novel strategy for arranging the training data has been proposed by Christiansen et al. [9], where the idea is to select small samples of simulated data for different sea states and collect these in one training sequence. With a proper selection of data, an ANN can be trained to predict tension forces in a mooring line on a floating offshore platform for a large range of sea states two orders of magnitude faster than the corresponding direct time integration scheme. It has been shown how the computation time for the simulations associated with a full fatigue analysis of a mooring line system on a floating offshore platform can be reduced from about 10 hours to less than 2 minutes.
Training of an ANN corresponds to minimizing a predefined error function. Several studies on the efficiency of using different objective functions for ANN training have been conducted over the last decades. Accelerated learning in neural networks has been obtained by Solla et al. [10] using relative entropy as the error measure, and Hampshire and Waibel [11] presented an objective function called classification figure of merit (CFM) to improve phoneme recognition in neural networks. A comparative study performed by Altincay and Demirekler [12] showed that the CFM objective function also improved the performance of neural networks trained as classifiers for speaker identification. However, these references all consider networks used for classification, whereas the one used in this paper is trained to perform regression between time-continuous variables. Hence, the problem studied in this paper calls for different cost functions from those used for neural classifiers.
This study evaluates and compares four different error functions with respect to ANN performance on the fatigue life analysis of the same floating offshore platform as used in [9]. A numerical model of a mooring line system on a floating platform subject to current, wind, and wave forces is established. The model is used to generate several 3-hour time domain simulations at seven different sea states, with significant wave heights ranging from 2 m to 14 m in steps of 2 m. The generated data is then divided into a training set and a validation set. The training set consists of series of simulations at only the sea states with 2 m, 8 m, and 14 m wave height. The remaining part is then left for validation of the trained ANN. The full numerical time integration analysis is carried out by the two tailor-made programs SIMO [13] and RIFLEX [14], while the neural network simulations are conducted by a small MATLAB toolbox.
2. Artificial Neural Network
The artificial neural network is a pattern recognition tool that replicates the ability of the human brain to recognize and predict various kinds of patterns. In the following, an ANN will be trained to recognize and predict the relationship between the motion of a floating platform and the resulting tension forces in a specific anchor chain.
2.1. Setting Up the ANN. The architecture of a typical one-layer artificial neural network is shown in Figure 1. The ANN consists of an input layer, where each input neuron represents a measured time discrete state of the system. In the present case in Figure 1, the neurons of the input layer represent the six motion components $(x, y, z, \alpha, \beta, \gamma)$ of the floating platform and the previous time discrete anchor chain tension force ($T_\text{past}$). The input layer is connected to a single hidden layer, which is then connected to the output layer representing the tension force ($T$). Two neurons in neighboring layers are connected, and each of these connections has a weight. The training of an ANN corresponds to an optimization of these weights with respect to a particular training data set. The accuracy and efficiency of the network depend on the network architecture, the optimization of the individual weights, and the choice of error function used in the applied optimization procedure.
The design and architecture of the ANN and the subsequent training procedure follow the approach outlined in [15]. Assume that the vectors x, y, and z contain the neuron variables of the input layer, output layer, and hidden layer, respectively. The output layer and hidden layer values can be calculated by the expressions

$$\mathbf{y} = \mathbf{W}_O^\top \mathbf{z}, \qquad \mathbf{z} = \tanh\left(\mathbf{W}_I^\top \mathbf{x}\right), \qquad x_0 \equiv z_0 \equiv 1, \tag{1}$$
where $\mathbf{W}_I$ and $\mathbf{W}_O$ are arrays that contain the neuron connection weights between the input and the hidden layer and between the hidden and the output layer, respectively. By setting $x_0$ and $z_0$ permanently to one, biases in the data can be absorbed into the input and hidden layer. The tangent hyperbolic function is used as activation function between the input and the hidden layer. A nonlinear activation function is needed in order to introduce nonlinearities into the neural network. The tangent hyperbolic is often used in networks of this type, which represent a monotonic mapping between continuous variables, because it provides fast convergence in the network training procedure; see [16].
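The forward mapping in (1), with the bias units $x_0 = z_0 = 1$, can be sketched as follows in Python/NumPy. This is an illustrative reconstruction; the study's own implementation is a MATLAB toolbox, and all names and array layouts here are assumptions:

```python
import numpy as np

def forward(x, W_I, W_O):
    """Forward pass of (1): z = tanh(W_I^T x), y = W_O^T z, with the
    fixed units x_0 = z_0 = 1 absorbing the biases."""
    x = np.concatenate(([1.0], x))   # prepend x_0 = 1 (input-layer bias)
    z = np.tanh(W_I.T @ x)           # hidden-layer activations
    z = np.concatenate(([1.0], z))   # prepend z_0 = 1 (hidden-layer bias)
    return W_O.T @ z                 # linear output: predicted tension

# With all weights zero except the output bias weight, the prediction
# equals that bias, regardless of the input:
W_I = np.zeros((3, 2))                    # 2 inputs (+ bias row), 2 hidden units
W_O = np.zeros((3, 1)); W_O[0, 0] = 5.0   # output bias weight on z_0
print(forward(np.array([0.3, -0.2]), W_I, W_O))   # -> [5.]
```

With this layout, $\mathbf{W}_I$ has one row per input component (plus the bias row) and one column per hidden neuron, matching the transposes in (1).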
The optimal weight components of the arrays $\mathbf{W}_I$ and $\mathbf{W}_O$ are found by an iterative procedure where the weights are modified to give a minimum with respect to a certain error function. The updating of the weight components is performed by a classic gradient descent technique, which adjusts the weights in the opposite direction of the gradient of the error function [17]. For the ANN this gradient descent updating can be written as

$$\mathbf{W}_\text{new} = \mathbf{W}_\text{old} + \Delta\mathbf{W}, \qquad \Delta\mathbf{W} = -\eta\,\frac{\partial E(\mathbf{W})}{\partial \mathbf{W}}, \tag{2}$$
where $E$ is a predefined error function and $\eta$ is the learning step size parameter. This parameter can either be constant or updated during the training of the ANN. For the applications in the present paper the learning step size parameter is dynamic and adjusted at each iteration: it is increased if the training error has decreased compared to previous iteration steps and reduced if the error increases.
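A minimal sketch of the update rule (2) together with the dynamic step-size adjustment described above. The growth and shrink factors (1.1 and 0.5) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def update_weights(W, grad_E, eta):
    """Gradient descent step of (2): W_new = W_old - eta * dE/dW."""
    return W - eta * grad_E

def adapt_step(eta, err, prev_err, grow=1.1, shrink=0.5):
    """Dynamic learning step: increase eta when the training error
    decreased, reduce it when the error increased (factors assumed)."""
    return eta * grow if err < prev_err else eta * shrink
```

In a training loop, `adapt_step` would be applied once per iteration, comparing the current training error with that of the previous iteration.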
2.2. Error Functions. As mentioned above in Section 2.1, the training of an ANN corresponds to minimizing the associated measure of error represented by the predefined error function
Figure 1: Sketch of artificial neural network for predicting top tension force in mooring line. Inputs: surge ($x$), sway ($y$), heave ($z$), roll ($\alpha$), pitch ($\beta$), yaw ($\gamma$), and $T_\text{past}$; weight arrays $\mathbf{W}_I$ and $\mathbf{W}_O$ with bias nodes; single hidden layer; output $T$.
$E$. The literature suggests many choices of error functions [16].

The simplest and most commonly used error function in neural networks used for regression is the mean square error (MSE). However, the purpose of the present ANN is to significantly reduce the calculation time for a fatigue analysis of the marine type structure. And since the large amplitude stresses contribute by far the most to the accumulated damage of the mooring lines, it is of interest to investigate how a different choice of error measure will affect the accuracy and efficiency of the fatigue life calculations. Four different error functions are therefore tested and compared to the full numerical solution obtained by time simulations using the RIFLEX code.
The comparison is based on the so-called Minkowski-R error

$$E = \frac{1}{R}\sum_{n}\sum_{k=1}^{c}\left|y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn}\right|^{R}, \tag{3}$$
where $y$ is the scalar ANN output and $t$ is the target value. The classic MSE is seen to be a special case of the Minkowski error with $R = 2$. In many situations the performance and accuracy of the ANN are equally important regardless of the magnitude of the actual output. However, when dealing with analysis of structures this is not always the case. For example, the purpose of the ANN in the present paper is to simulate the top tension force time history in a mooring line, which is subsequently used to evaluate the fatigue life of the line. And since by far the most damage in the mooring line is introduced by large amplitude stress cycles, ANN inaccuracy on large stresses is much more expensive than errors on small and basically unimportant amplitudes. One way to specifically emphasize large amplitudes is to increase the $R$-value in (3). Another and more direct way to place additional focus on the importance of large stress amplitudes is by multiplying each term in (3) by the absolute value of the target values. This yields the following weighted error function:
$$E_w = \frac{1}{R}\sum_{n}\sum_{k=1}^{c}\left|y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn}\right|^{R} \cdot \left|t_{kn}\right|^{R}. \tag{4}$$
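The unweighted and weighted Minkowski-R errors of (3) and (4) can be written as a single routine; a sketch, assuming one scalar output per sample as in the present network:

```python
import numpy as np

def minkowski_error(y, t, R=2, weighted=False):
    """Minkowski-R error of (3); with weighted=True each term is also
    multiplied by |t|^R, giving the weighted error E_w of (4)."""
    terms = np.abs(y - t) ** R          # |y_n - t_n|^R per sample
    if weighted:
        terms = terms * np.abs(t) ** R  # emphasis on large target amplitudes
    return terms.sum() / R

y = np.array([1.0, 2.0])    # ANN outputs (illustrative values)
t = np.array([0.0, 4.0])    # target values
print(minkowski_error(y, t, R=2))                 # E_2  -> 2.5
print(minkowski_error(y, t, R=2, weighted=True))  # E_w2 -> 32.0
```

The second call shows the weighting effect: the sample with the large target dominates the error, which is exactly the intended emphasis on large amplitudes.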
The performance of a trained ANN is usually measured in terms of the so-called validation error, which is calculated in the same way as the training error but on an entirely new data set that has not been part of the network training. This means that, when comparing networks that have been trained using different error functions, the validation error is the common measure to assess performance in terms of accuracy and computational effort. Obviously, the various networks considered in the following could be tested and compared against any of the error functions that the networks have been trained by. But that would clearly favor the particular ANN that has been trained by the specific error function chosen as the validation measure. And since the ultimate objective of the ANN is to predict the fatigue life of the mooring line, it is appropriate to calculate and compare the accumulated damage in the mooring line caused by all seven sea state realizations previously mentioned in Section 1.
2.3. Network Training. In (2) the steepest descent correction of the weight vector for the training of the network requires the first derivative of the error function $E$ with respect to the weight arrays $\mathbf{W}$. Differentiation of (3) with respect to the components of the two weight matrices $\mathbf{W}_I$ and $\mathbf{W}_O$ yields

$$\frac{dE}{dW_{Okj}} = \sum_{n}\left|y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn}\right|^{R-1} z_j,$$

$$\frac{dE}{dW_{Iji}} = \sum_{n}\left(\left(1 - z_j^2\right)\sum_{k=1}^{c} w_{kj}\left|y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn}\right|^{R-1}\right) x_i. \tag{5}$$
These gradients are now inserted into (2) and thereby govern the correction of the weights for each iteration step in the training procedure. Similar differentiation of the weighted error function (4) yields
$$\frac{dE_w}{dW_{Okj}} = \sum_{n}\left|y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn}\right|^{R-1} z_j \cdot \left|t_{kn}\right|,$$

$$\frac{dE_w}{dW_{Iji}} = \sum_{n}\left(\left(1 - z_j^2\right)\sum_{k=1}^{c} w_{kj}\left|y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn}\right|^{R-1}\right) x_i \cdot \left|t_{kn}\right|. \tag{6}$$
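For the single-output network used here, the output-layer gradient in (5) can be sketched as below. This is an illustration under the assumption that the sign factor $\mathrm{sign}(y - t)$ arising from differentiating $|\cdot|^R$ (not written out in (5)) is included explicitly:

```python
import numpy as np

def grad_output_weights(y, t, z, R=2):
    """Output-layer gradient of (5) for one output neuron:
    sum over samples n of |y_n - t_n|^(R-1) * sign(y_n - t_n) * z_jn.
    y, t: (n_samples,); z: (n_samples, n_hidden+1) incl. the bias unit z_0."""
    e = y - t                                  # per-sample output error
    g = np.abs(e) ** (R - 1) * np.sign(e)      # derivative of |e|^R up to factor R
    return z.T @ g                             # sum over the n samples

# One sample with error +1 and hidden outputs (z_0, z_1) = (1, 0.5):
print(grad_output_weights(np.array([2.0]), np.array([1.0]),
                          np.array([[1.0, 0.5]])))
```

The hidden-layer gradient follows the same pattern with the extra factor $(1 - z_j^2)$ from the tanh derivative, as in the second line of (5).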
The above equations are implemented into the training algorithm and tested for the two power values $R = 2$ and $R = 4$. This gives a total of four different error functions, which in the following will be denoted as:

(i) $E_2$: unweighted error function with $R = 2$;
(ii) $E_{w2}$: weighted error function with $R = 2$;
(iii) $E_4$: unweighted error function with $R = 4$;
(iv) $E_{w4}$: weighted error function with $R = 4$.

It should be noted that the first case, $E_2$, represents the classic MSE function.
3. Application to Structural Model
The structure used as the basis for the comparison of the different error functions is shown in Figure 2. It consists of a floating offshore platform located at 105 m water depth, which is anchored by 18 mooring lines assembled in four main clusters. The external forces acting on the structure are induced by waves, current, and wind.
3.1. Structural Model. In principle, the dynamic analysis of the platform-mooring system corresponds to solving the equation of motion

$$\mathbf{M}(\mathbf{r})\ddot{\mathbf{r}} + \mathbf{C}(\mathbf{r})\dot{\mathbf{r}} + \mathbf{K}(\mathbf{r})\mathbf{r} = \mathbf{f}(t). \tag{7}$$
In this nonlinear equation, r contains the degrees of freedom of the structural model and f includes all external forces acting on the structure from, for example, gravity, buoyancy, and hydrodynamic effects, while the nonconstant matrices M, C, and K represent the system inertia, damping, and stiffness, respectively. The system inertia matrix M takes into account both the structural inertia and the response dependent hydrodynamic added mass. Linear and nonlinear energy dissipation from both internal structural damping and hydrodynamic damping are accounted for by the damping matrix C. Finally, the stiffness matrix contains contributions from both the elastic stiffness and the response dependent geometric stiffness.
The nonlinear equations of motion in (7) couple the structural response of the floating platform and the response of the mooring lines. However, the system is effectively solved by separating the solution procedure into the following steps.
First, the motion of the floating platform is computed by the program SIMO [13], assuming a quasistatic catenary mooring line model with geometric nonlinearities. The platform response from this initial analysis is subsequently used as excitation, in terms of prescribed platform motion, in a detailed nonlinear finite element analysis of the specific mooring line with the highest tension stresses. The location of the mooring line with the largest stresses is indicated in Figure 3. For this specific line the hot-spot with respect to fatigue is located close to the platform, and the force there is in the following referred to as the top tension force. From the detailed fully nonlinear analysis performed by RIFLEX, the time history of the top tension force at this hot-spot is extracted.
Based on the simulated time histories for both the platform motion and the top tension force, an ANN is trained to predict the top tension force in the selected mooring line with the platform response variables as network input. This is considered next in Section 3.2. In [9] a multilayer ANN was trained to simulate the top tension force two orders of magnitude faster than a corresponding numerical model. The training data was set up and arranged so that a single ANN with a single hidden layer could simulate all fatigue relevant sea states and thereby provide a significant reduction in the computational effort associated with a fatigue life evaluation. For clarity, the ANN used in this example covers only a few sea states with different significant wave heights and constant peak period. This gives a compact neural network that is conveniently used to illustrate the influence of changing the error function in the training of the network.
3.2. Selection of Training Data. The ultimate purpose of the ANN is to completely bypass the computationally expensive numerical time integration procedure, which in this case is conducted by the RIFLEX model. This means that the input to the neural network must be identical to the input used for the RIFLEX calculations. In this case the input is therefore the platform motion, represented by the six degrees of freedom denoted in Figure 1 and illustrated in Figure 2. In principle, the number of neural network output variables can be chosen freely, and in fact all degrees of freedom from the numerical finite element analysis could be included as output variables in the corresponding ANN. However, the strength of the ANN in this context is that it may provide only the output variable that drives the design of the structure, which in this case is the maximum top tension force in the particular mooring line. This leads to a very fast simulation procedure which, for a well-trained network, provides sufficiently accurate results. Thus, the ANN is in the present case designed and trained to predict the top tension of the mooring line, and the platform motion (six motion components: surge, sway, heave, roll, pitch, and yaw) is, together with the top tension of previous time steps, used as input to the ANN; see Figure 1. This means that the input vector $\mathbf{x}_n$ at time increment $n$ can be constructed as

$$\mathbf{x}_n = \big[\,[x_t\; x_{t-h}\cdots x_{t-dh}],\; [y_t\; y_{t-h}\cdots y_{t-dh}],\; [z_t\; z_{t-h}\cdots z_{t-dh}],\; [\alpha_t\; \alpha_{t-h}\cdots \alpha_{t-dh}],\; [\beta_t\; \beta_{t-h}\cdots \beta_{t-dh}],\; [\gamma_t\; \gamma_{t-h}\cdots \gamma_{t-dh}],\; [T_{t-h}\; T_{t-2h}\cdots T_{t-dh}]\,\big]^{\top}, \tag{8}$$
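Assembling the input vector of (8) from simulated motion and tension records can be sketched as follows. The array layout and names are assumptions (`motion` holds one column per motion component), not the paper's implementation:

```python
import numpy as np

def build_input(motion, tension, n, d):
    """Input vector x_n of (8): for every motion component the current value
    plus its d previous steps, followed by the d previous tension values.
    motion: (n_steps, n_components); tension: (n_steps,)."""
    lags = np.stack([motion[n - k] for k in range(d + 1)])   # (d+1, n_comp)
    past_T = [tension[n - k] for k in range(1, d + 1)]       # T_{t-h}..T_{t-dh}
    return np.concatenate([lags.T.ravel(), past_T])          # component-major

# Two motion components, memory d = 1, at step n = 2;
# result order: x_t, x_{t-h}, y_t, y_{t-h}, T_{t-h}
motion = np.array([[0.0, 10.0], [1.0, 11.0], [2.0, 12.0]])
tension = np.array([100.0, 101.0, 102.0])
print(build_input(motion, tension, n=2, d=1))
```

For the six platform motion components and memory $d$, the resulting input vector has length $6(d+1) + d$, matching the grouping in (8).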
Figure 2: Mooring system with floating platform and anchor lines, with the platform motion components $x$, $y$, $z$, $\alpha$, $\beta$, and $\gamma$ indicated.
Figure 3: Mooring line system (top view), showing the traveling direction of waves, current, and wind and the mooring line with highest stress.
where $t = nh$ denotes the current time, $h$ is the time increment, and $d$ is the number of previous time steps included in the input, that is, the model memory. The corresponding ANN output is the value of the top tension force $T_t$ in the mooring line:

$$y_n = T_t. \tag{9}$$
Since there is only one network output, $y$ is a scalar and not a vector as in (1). For the training of the ANN, nonlinear simulations in RIFLEX are conducted for sea states with a significant wave height of $H_s$ = 2 m, 8 m, and 14 m, respectively. While neural networks are very good at interpolating within the training range, they are typically only able to perform limited extrapolation outside the training range. Thus, the selected training set must contain both the minimum wave height (2 m), the maximum wave height (14 m), and, in this case, a moderate wave height (8 m) to provide sufficient training data over the full range of interest. With these wave heights included in the training data, the ANN is expected to be able to provide accurate time histories for the top tension force for all intermediate wave heights. The seven 3-hour simulation records generated by RIFLEX
are divided into a training set and a validation set. The data used for training of the ANN is shown in Figures 4 and 5. Figure 4 shows the time histories for the six motion degrees of freedom of the platform, calculated by the initial analysis in SIMO and used as input to both the full numerical analysis in RIFLEX and the ANN training and simulation. Figure 5 shows corresponding time histories for the top tension force determined by RIFLEX. The full time histories shown in Figures 4 and 5 are constructed of time series for the three significant wave heights. The first 830 seconds of the training set represent 2 m significant wave height, the next 830 seconds are for 8 m wave height, and the remaining part is then for 14 m wave height.
The SIMO simulations are conducted with a time step size of 0.5 s. In the subsequent RIFLEX simulations the time step must be sufficiently small to keep the associated Newton-Raphson iteration algorithm stable. In these simulations a time step size of 0.1 s is therefore chosen, which means that the additional input parameters are obtained by linear interpolation between the simulation values from SIMO. For the ANN, the time step size $h$ must be chosen so that the network is able to grasp the dynamic behavior of the structure. Therefore, in many cases the ANN is capable of
Figure 4: Platform motion (surge, sway, heave, roll, pitch, and yaw) over the 2500 s training record, used as ANN training input data.
Figure 5: Top tension force history (kN) over the 2500 s training record, used as ANN training target data.
Figure 6: Validation error as function of the number of neurons in the hidden layer.
handling fairly large time step increments compared to the corresponding numerical models. When using a larger time step in the ANN simulations, for example by omitting a number of in-between data points, it is possible to reduce the size of the training data set and thereby reduce the computational time used for ANN training and eventually also for the ANN simulation. In the example of this paper a time step size of $h$ = 2.0 s is found to yield a good balance between accuracy and computational efficiency, and this time step $h$ is therefore used for the ANN simulations.
3.3. Design of ANN Architecture. In the design of the ANN architecture, three variables are investigated: (1) the number of neurons in the hidden layer, (2) the size of the model memory $d$, and (3) the required amount of training data. When the ANN has been trained and is ready for use, the network size has no significant influence on the total simulation time and computational effort. The main time consumer is the training part, and the time used for training of the network highly depends on the network size and the size of the training data set. Hence, it is of great interest to design an ANN architecture that is as compact and effective as possible.
Figure 6 shows a plot of the test error measure $E_\text{test}$ relative to the number of neurons in the hidden layer of the ANN. In this section, where the three basic ANN variables are chosen, the error measure is the mean square error (MSE), corresponding to the $E_2$ error measure in Section 2.3. The curve in Figure 6 furthermore represents the mean value of the error based on five simulations, while the vertical bars indicate the standard deviation. It is seen from this curve that both the error and the scatter in performance of the trained ANN are lowest when the hidden layer contains four neurons. Therefore, an effective and fairly accurate ANN performance is expected when four neurons in the hidden layer are used in the following simulations.
Figure 7 shows the test error relative to the model memory $d$, which represents the number of previous time steps used as network input. First of all, it is found that including memory in the model significantly reduces the error. However, it is also seen from the figure that an increase of the memory beyond four previous steps implies no significant improvement of the ANN performance. Thus, a four-step memory, that is, $d = 4$, is used in the following numerical analyses.
For the training of any ANN it is always crucial to have a sufficient amount of training data in order to cover the full range of the network and secure applicability with sufficient statistical significance. Figure 8 shows the test error as function of the length of the training data set. As for the parameter studies in Figures 6 and 7, the present curve shows the mean results based on five simulation records. To make sure that a sufficient amount of data is used for the training of the ANN, a total simulated record of 2500 s is included, which corresponds to approximately 14 minutes for each of the three sea states. It is seen in Figure 8 that this length of the simulation record is more than sufficient to secure a low error.
The trained ANN is able to generate nonlinear output without equilibrium iterations and hence at an often significantly higher computational pace compared to classic integration procedures with embedded iteration schemes. Figure 9 shows the simulation of the top tension force in the mooring line calculated by the finite element method in RIFLEX and by the trained ANN. The four subfigures in Figure 9 represent the four wave heights that were not part of the ANN training, that is, $H_s$ = 4 m, 6 m, 10 m, and 12 m. For these particular simulation records the trained ANN calculates a factor of about 600 times faster than the FEM calculations by RIFLEX.
3.4. Comparison of Error Measures. In the design of the ANN architecture presented above, the results are obtained for an ANN trained with the MSE as objective function or error measure. It is in the following conveniently assumed that this ANN architecture is valid regardless of the specific choice of error function. Thus, the various error measures presented in Section 2.3 are in this section compared for the ANN with four neurons in the hidden layer, four memory input variables, and a training length of 2500 s.
As mentioned earlier, some of the pregenerated data are saved for performance validation of the trained ANN. These data are used to calculate a validation error $E_\text{test}$, which is the measure for the accuracy of the trained network. Figure 10 shows the development of the validation error during the network training with all four different error functions presented in Section 2.3. It is clearly seen that all four error measures are minimized during training, whereas it is difficult to compare the detailed performance and efficiency of the four different networks based on these curves.
Figure 11(a) summarizes the development of the validation error for the four ANNs, but this time the validation error is calculated using the same MSE error measure ($E_2$) to give a consistent basis for comparison. Thus, the four networks have been trained with the four error measures, respectively, while in Figure 11(a) they are compared by the MSE. Even though the networks here are compared on common ground, it is still difficult to evaluate how well they will perform individually on a full fatigue analysis. Figure 11(b) illustrates the accuracy of the four networks by showing a close-up of a local maximum in the top tension force time history. It is seen that the two unweighted error functions, $E_2$ and $E_4$, perform clearly better than the weighted functions. Also, the unweighted error measures provide a smaller MSE error in Figure 11(a). This indicates that weighting of the error functions implies no improvement of the performance and accuracy of the ANN.
3.5. Rain Flow Count. The magnitude of the various test error measures is difficult to relate directly to the performance of the ANN compared to the performance of the RIFLEX model. Since these long time-domain simulations are often used in fatigue life calculations, an appropriate way to evaluate the accuracy of the ANN is to compare the accumulated rain
Figure 7: Validation error as function of model memory (number of previous time steps included in the input).
Figure 8: Validation error as function of the amount of training data (length of simulated training data in seconds).
flow counts of the tension force cycles for each significant wave height. In these fatigue analyses the RIFLEX results are considered as the exact solution. For these calculations the full 3-hour simulations are used, and Figure 12 shows the results of the rain flow count of accumulated tension force cycles for each of the significant wave heights. Deviations between RIFLEX and ANN simulations are listed in Table 1. It should be noted that the deviations for the seven individual sea states do not add up to give the total deviation, because the individual sea states do not contribute equally to the overall damage. It is seen that the various networks perform very well on all individual sea states and that the best networks thereby obtain a deviation of less than 2% for the accumulated tension force cycles when summing up the contributions from all sea states. This deviation is a robust estimate and is likely to also represent the accuracy of a full subsequent fatigue life evaluation.
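For reference, rain flow counting can be sketched with a simplified three-point algorithm that extracts cycle ranges from the turning points of a tension history. This is an illustrative sketch (the residue is counted once rather than as half-cycles), not the counting routine used in the study:

```python
def turning_points(series):
    """Reduce a time series to its local extrema (turning points)."""
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) >= 0:
            tp[-1] = x        # same direction: extend the current extremum
        else:
            tp.append(x)      # direction change: new turning point
    return tp

def rainflow_ranges(series):
    """Simplified three-point rain flow counting: whenever a new range X is
    at least as large as the preceding range Y, Y is closed as a full cycle."""
    stack, ranges = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            ranges.append(y_rng)   # close cycle Y
            del stack[-3:-1]       # remove the two points forming Y
    ranges.extend(abs(b - a) for a, b in zip(stack, stack[1:]))  # residue
    return ranges

print(rainflow_ranges([0, 5, 1, 4, 2, 6, 0]))   # -> [2, 4, 6]
```

Summing the extracted ranges per sea state gives the accumulated tension force cycles that Figure 12 and Table 1 compare between the ANN and RIFLEX.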
It is seen from the rain flow counting results in Figure 12 and Table 1 that the neural networks trained with unweighted error functions in general perform slightly better than those trained with weighted error functions. Thus, placing a weight on the error function does not seem to have the desired effect in this application concerning the analysis of a mooring line system for a floating platform.

Table 1: Deviations (%) on accumulated tension force cycles.

H_S (m)    E_2     E_2^w    E_4     E_4^w
2         -3.6    -16.3     7.3     65.8
4         -6.9    -13.0     4.8     32.0
6         -4.4     -8.0     2.6     14.9
8         -2.7     -4.8    -1.5      8.6
10        -2.0     -3.5    -1.6      6.9
12        -1.4     -3.1    -1.7      5.3
14        -0.8     -0.3     2.1      3.1
Total     -1.7     -3.8     1.6      6.4
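For readers who want to reproduce this type of comparison, the cycle extraction can be sketched with the classic three-point rainflow method. This is an illustrative sketch, not the authors' counting routine: it returns only the closed-cycle ranges and ignores the residual (half cycles), which a full ASTM E1049-style count would also report.

```python
def turning_points(series):
    """Reduce a load/tension history to its sequence of local extrema."""
    tp = [series[0]]
    for v in series[1:]:
        if v == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (v - tp[-1]) > 0:
            tp[-1] = v  # still moving in the same direction: extend the excursion
        else:
            tp.append(v)
    return tp

def rainflow_ranges(series):
    """Three-point rainflow count: list of closed-cycle ranges (residual ignored)."""
    stack, ranges = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            r_prev = abs(stack[-2] - stack[-3])
            r_last = abs(stack[-1] - stack[-2])
            if r_prev <= r_last:
                ranges.append(r_prev)   # the inner excursion closes a full cycle
                del stack[-3:-1]        # drop its two points, keep the outer ones
            else:
                break
    return ranges
```

Running such a counter on both the ANN and the RIFLEX tension histories, and comparing the resulting range histograms (or the damage computed from them through an S-N curve), gives exactly the kind of per-sea-state deviation reported in Table 1.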
Figure 9: Comparison of simulated top tension forces using ANN and RIFLEX for the four wave heights (H_S = 4 m, 6 m, 10 m, and 12 m) that were not part of the network training.
Figure 10: Validation error during training. (a) Error functions with R = 2: E_2 and E_2^w. (b) Error functions with R = 4: E_4 and E_4^w.
Figure 11: Comparison of performance for the four different error functions. (a) Validation error during training. (b) Accuracy of simulation at a peak of the top tension force time history, compared with RIFLEX.
Figure 12: Accumulated stress for each sea state H_S, calculated using rain flow counting.
4. Conclusion

It has been shown how a relatively small and compact artificial neural network can be trained to perform high-speed dynamic simulation of tension forces in a mooring line on a floating platform. Furthermore, it has been shown that a proper selection of training data enables the ANN to cover a wide range of different sea states, even sea states that are not included directly in the training data. In the example presented in this paper it is clear that weighting the error function used to train an ANN in order to emphasize peak response does not improve the network performance with respect to accuracy of fatigue calculations. In fact, the ANN appears to perform worse when trained with the weighted error function. On the other hand, it appears that increasing the power of the error function from two to four provides a slight improvement to the performance of the trained ANN.
Thus, the idea of a weighted error function seems to reduce the ANN performance: apparently, focusing on the high amplitudes deteriorates the low-amplitude response more than it improves the response at large amplitudes.
In conclusion, the Minkowski error with R = 4 is interesting for the mooring line example in more than one respect. It places more focus on the large amplitudes and improves the ANN slightly. Furthermore, the second derivative of E_4 is fairly easy to determine, which makes this objective function suitable for several network optimization schemes, such as Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS), that are based on the second derivative of the error function. Network optimization is, however, not considered further in this paper but will be the subject of future work.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] DNV, "Offshore Standard DNV-OS-F201—Dynamic Risers," Det Norske Veritas, October 2010.
[2] DNV, "Recommended Practice DNV-RP-F204—Riser Fatigue," Det Norske Veritas, July 2005.
[3] M. H. Patel and F. B. Seyed, "Review of flexible riser modelling and analysis techniques," Engineering Structures, vol. 17, no. 4, pp. 293–304, 1995.
[4] H. Adeli, "Neural networks in civil engineering: 1989–2000," Computer-Aided Civil and Infrastructure Engineering, vol. 16, no. 2, pp. 126–142, 2001.
[5] B. Warner and M. Misra, "Understanding neural networks as statistical tools," The American Statistician, vol. 50, no. 4, pp. 284–293, 1996.
[6] K. Ordaz-Hernandez, X. Fischer, and F. Bennis, "Model reduction technique for mechanical behaviour modelling: efficiency criteria and validity domain assessment," Journal of Mechanical Engineering Science, vol. 222, no. 3, pp. 493–505, 2008.
[7] M. Hosseini and H. Abbas, "Neural network approach for prediction of deflection of clamped beams struck by a mass," Thin-Walled Structures, vol. 60, pp. 222–228, 2012.
[8] R. Guarize, N. A. F. Matos, L. V. S. Sagrilo, and E. C. P. Lima, "Neural networks in the dynamic response analysis of slender marine structures," Applied Ocean Research, vol. 29, no. 4, pp. 191–198, 2007.
[9] N. H. Christiansen, P. E. T. Voie, J. Høgsberg, and N. Sødahl, "Efficient mooring line fatigue analysis using a hybrid method time domain simulation scheme," in Proceedings of the 32nd ASME International Conference on Ocean, Offshore and Arctic Engineering (OMAE '13), vol. 1, 2013.
[10] S. A. Solla, E. Levin, and M. Fleisher, "Accelerated learning in layered neural networks," Complex Systems, vol. 2, no. 6, pp. 625–639, 1988.
[11] J. B. Hampshire and A. H. Waibel, "Novel objective function for improved phoneme recognition using time-delay neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 216–228, 1990.
[12] H. Altincay and M. Demirekler, "Comparison of different objective functions for optimal linear combination of classifiers for speaker identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '01), vol. 1, pp. 401–404, May 2001.
[13] MARINTEK, SIMO Theory Manual, Sintef, Trondheim, Norway, 2009.
[14] MARINTEK, RIFLEX Theory Manual, Sintef, Trondheim, Norway, 2008.
[15] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[16] C. M. Bishop, Neural Networks for Pattern Recognition, The Clarendon Press, Oxford, UK, 1995.
[17] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
based on sufficiently known input patterns. This means that a network trained on one type of load pattern will have difficulties in predicting the response when the structure is exposed to different types of loading conditions. Recently, a novel strategy for arranging the training data has been proposed by Christiansen et al. [9], where the idea is to select small samples of simulated data for different sea states and collect these in one training sequence. With a proper selection of data, an ANN can be trained to predict tension forces in a mooring line on a floating offshore platform for a large range of sea states two orders of magnitude faster than the corresponding direct time integration scheme. It has been shown how the computation time for conducting the simulations associated with a full fatigue analysis of a mooring line system on a floating offshore platform can be reduced from about 10 hours to less than 2 minutes.
Training of an ANN corresponds to minimizing a predefined error function. Several studies on the efficiency of using different objective functions for ANN training have been conducted over the last decades. Accelerated learning in neural networks has been obtained by Solla et al. [10] using relative entropy as the error measure, and Hampshire and Waibel [11] presented an objective function called classification figure of merit (CFM) to improve phoneme recognition in neural networks. A comparative study performed by Altincay and Demirekler [12] showed that the CFM objective function also improved the performance of neural networks trained as classifiers for speaker identification. However, these references all consider networks used for classification, whereas the one used in this paper is trained to perform regression between time-continuous variables. Hence, the problem studied in this paper calls for different cost functions from those used for neural classifiers.
This study evaluates and compares four different error functions with respect to ANN performance in the fatigue life analysis of the same floating offshore platform as used in [9]. A numerical model of a mooring line system on a floating platform subject to current, wind, and wave forces is established. The model is used to generate several 3-hour time domain simulations at seven different sea states with 2 m, 4 m, ..., and 14 m significant wave height, respectively. The generated data is then divided into a training set and a validation set. The training set consists of series of simulations at only the sea states with 2 m, 8 m, and 14 m wave height. The remaining part is then left for validation of the trained ANN. The full numerical time integration analysis is carried out by the two tailor-made programs SIMO [13] and RIFLEX [14], while the neural network simulations are conducted by a small MATLAB toolbox.
2. Artificial Neural Network

The artificial neural network is a pattern recognition tool that replicates the ability of the human brain to recognize and predict various kinds of patterns. In the following, an ANN will be trained to recognize and predict the relationship between the motion of a floating platform and the resulting tension forces in a specific anchor chain.

2.1. Setting Up the ANN. The architecture of a typical one-layer artificial neural network is shown in Figure 1. The ANN consists of an input layer, where each input neuron represents a measured time-discrete state of the system. In the present case in Figure 1 the neurons of the input layer represent the six motion components (x, y, z, α, β, γ) of the floating platform and the previous time-discrete anchor chain tension force (T_past). The input layer is connected to a single hidden layer, which is then connected to the output layer representing the tension force (T). Two neurons in neighboring layers are connected, and each of these connections has a weight. The training of an ANN corresponds to an optimization of these weights with respect to a particular training data set. The accuracy and efficiency of the network depend on the network architecture, the optimization of the individual weights, and the choice of error function used in the applied optimization procedure.
The design and architecture of the ANN and the subsequent training procedure follow the approach outlined in [15]. Assume that the vectors x, y, and z contain the neuron variables of the input layer, output layer, and hidden layer, respectively. The output layer and hidden layer values can be calculated by the expressions

$$\mathbf{y} = \mathbf{W}_O^{\top}\mathbf{z}, \qquad \mathbf{z} = \tanh\left(\mathbf{W}_I^{\top}\mathbf{x}\right), \qquad x_0 \equiv z_0 \equiv 1 \quad (1)$$

where W_I and W_O are arrays that contain the neuron connection weights between the input and the hidden layer and between the hidden and the output layer, respectively. By setting x_0 and z_0 permanently to one, biases in the data can be absorbed into the input and hidden layer. The hyperbolic tangent is used as activation function between the input and the hidden layer. A nonlinear activation function is needed in order to introduce nonlinearities into the neural network. The hyperbolic tangent is often used in networks of this type, which represent a monotonic mapping between continuous variables, because it provides fast convergence in the network training procedure; see [16].
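As a sketch, the mapping in (1) takes only a few lines of Python; the dimensions and weight values below are hypothetical, and the bias absorption is made explicit by fixing the first entry of x and z to one:

```python
import math

def forward(W_I, W_O, x):
    """One-hidden-layer ANN, cf. (1): z = tanh(W_I^T x), y = W_O^T z.
    x[0] must be the bias entry 1; z[0] is likewise fixed to 1."""
    n_hidden = len(W_I[0])                      # W_I has one column per hidden neuron
    z = [1.0] + [math.tanh(sum(W_I[i][j] * x[i] for i in range(len(x))))
                 for j in range(n_hidden)]
    n_out = len(W_O[0])                         # linear output layer: y = W_O^T z
    y = [sum(W_O[j][k] * z[j] for j in range(len(z))) for k in range(n_out)]
    return y, z

# Hypothetical sizes: 2 inputs (+ bias), 2 hidden neurons (+ bias), 1 output
W_I = [[0.1, -0.2], [0.3, 0.4], [0.0, 0.5]]
W_O = [[0.5], [1.0], [-1.0]]
y, z = forward(W_I, W_O, [1.0, 0.2, -0.1])
```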
The optimal weight components of the arrays W_I and W_O are found by an iterative procedure, where the weights are modified to give a minimum with respect to a certain error function. The updating of the weight components is performed by a classic gradient descent technique, which adjusts the weights in the opposite direction of the gradient of the error function [17]. For the ANN this gradient descent updating can be written as

$$\mathbf{W}^{\mathrm{new}} = \mathbf{W}^{\mathrm{old}} + \Delta\mathbf{W}, \qquad \Delta\mathbf{W} = -\eta\,\frac{\partial E(\mathbf{W})}{\partial\mathbf{W}} \quad (2)$$

where E is a predefined error function and η is the learning step size parameter. This parameter can either be constant or updated during the training of the ANN. For the applications in the present paper the learning step size parameter is dynamic and is adjusted in each iteration, so that it is increased if the training error decreases compared to previous iteration steps and reduced if the error increases.
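A minimal sketch of the update (2) with such a dynamic step size is given below; the growth and shrink factors are illustrative assumptions, since the paper does not state the exact adaptation constants:

```python
def update_weights(W, grad, eta):
    """One gradient descent step, cf. (2): W_new = W_old - eta * dE/dW."""
    return [[W[i][j] - eta * grad[i][j] for j in range(len(W[i]))]
            for i in range(len(W))]

def adapt_step(eta, err, err_prev, grow=1.1, shrink=0.5):
    """Dynamic learning step size: grow eta while the training error is
    decreasing, shrink it when the error increases (factors are hypothetical)."""
    return eta * grow if err < err_prev else eta * shrink
```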
Figure 1: Sketch of the artificial neural network for predicting the top tension force in a mooring line.

2.2. Error Functions. As mentioned above in Section 2.1, the training of an ANN corresponds to minimizing the associated measure of error, represented by the predefined error function E. The literature suggests many choices of error function [16].
The simplest and most commonly used error function in neural networks for regression is the mean square error (MSE). However, the purpose of the present ANN is to significantly reduce the calculation time for a fatigue analysis of this marine type of structure. And since the large-amplitude stresses contribute by far the most to the accumulated damage of the mooring lines, it is of interest to investigate how a different choice of error measure will affect the accuracy and efficiency of the fatigue life calculations. Four different error functions are therefore tested and compared to the full numerical solution obtained by time simulations using the RIFLEX code.
The comparison is based on the so-called Minkowski-R error

$$E = \frac{1}{R}\sum_{n}\sum_{k=1}^{c}\left|y_k(\mathbf{x}_n,\mathbf{W}) - t_{kn}\right|^{R} \quad (3)$$
where y is the scalar ANN output and t is the target value. The classic MSE is seen to be a special case of the Minkowski error with R = 2. In many situations the performance and accuracy of the ANN are equally important regardless of the magnitude of the actual output. However, when dealing with analysis of structures this is not always the case. For example, the purpose of the ANN in the present paper is to simulate the top tension force time history in a mooring line, which is subsequently used to evaluate the fatigue life of the line. And since by far the most damage in the mooring line is introduced by large-amplitude stress cycles, ANN inaccuracy on large stresses is much more expensive than errors on small and basically unimportant amplitudes. One way to specifically emphasize large amplitudes is to increase the R-value in (3). Another and more direct way to place additional focus on the importance of large stress amplitudes is to multiply each term in (3) by the absolute value of the target value. This yields the following weighted error function:
$$E_w = \frac{1}{R}\sum_{n}\sum_{k=1}^{c}\left|y_k(\mathbf{x}_n,\mathbf{W}) - t_{kn}\right|^{R}\cdot\left|t_{kn}\right|^{R} \quad (4)$$
The performance of a trained ANN is usually measured in terms of the so-called validation error, which is calculated in the same way as the training error but on an entirely new data set that has not been part of the network training. This means that, when comparing networks that have been trained using different error functions, the validation error is the common measure to assess performance in terms of accuracy and computational effort. Obviously, the various networks considered in the following could be tested and compared against any of the error functions that the networks have been trained by. But that would definitely favor the particular ANN that has been trained by the specific error function chosen as the validation measure. And since the ultimate objective of the ANN is to predict the fatigue life of the mooring line, it is appropriate to calculate and compare the accumulated damage in the mooring line caused by all seven sea state realizations previously mentioned in Section 1.
2.3. Network Training. In (2) the steepest descent correction of the weight vector for the training of the network requires the first derivative of the error function E with respect to the weight arrays W. Differentiation of (3) with respect to the components of the two weight matrices W_I and W_O yields

$$\frac{dE}{dW_{Okj}} = \sum_{n}\left|y_k(\mathbf{x}_n,\mathbf{W}) - t_{kn}\right|^{R-1} z_j,$$

$$\frac{dE}{dW_{Iji}} = \sum_{n}\left(\left(1 - z_j^{2}\right)\sum_{k=1}^{c} w_{kj}\left|y_k(\mathbf{x}_n,\mathbf{W}) - t_{kn}\right|^{R-1}\right) x_i \quad (5)$$
These gradients are now inserted into (2) and thereby govern the correction of the weights for each iteration step in the training procedure. Similar differentiation of the weighted error function (4) yields
$$\frac{dE_w}{dW_{Okj}} = \sum_{n}\left|y_k(\mathbf{x}_n,\mathbf{W}) - t_{kn}\right|^{R-1} z_j \cdot\left|t_{kn}\right|,$$

$$\frac{dE_w}{dW_{Iji}} = \sum_{n}\left(\left(1 - z_j^{2}\right)\sum_{k=1}^{c} w_{kj}\left|y_k(\mathbf{x}_n,\mathbf{W}) - t_{kn}\right|^{R-1}\right) x_i \cdot\left|t_{kn}\right| \quad (6)$$
The above equations are implemented into the training algorithm and tested for the two power values R = 2 and R = 4. This gives a total of four different error functions, which in the following will be denoted as:

(i) E_2: unweighted error function with R = 2;
(ii) E_2^w: weighted error function with R = 2;
(iii) E_4: unweighted error function with R = 4;
(iv) E_4^w: weighted error function with R = 4.

It should be noted that the first case, E_2, represents the classic MSE function.
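As an implementation sketch of (5), for a single training pattern and a scalar output (c = 1), the output-layer gradient becomes the short function below. Note that the sign of the residual, which the printed formula leaves implicit, must be included in code for the descent direction to be correct:

```python
def grad_output_weights(y, t, z, R=2):
    """Gradient of the Minkowski-R error w.r.t. the output weights, cf. (5):
    dE/dW_Oj = |y - t|**(R - 1) * sign(y - t) * z[j], one entry per hidden value."""
    e = y - t
    sign = 1.0 if e >= 0.0 else -1.0
    return [abs(e) ** (R - 1) * sign * zj for zj in z]
```

For R = 2 this reduces to the familiar (y - t) z_j of least-squares backpropagation; for the weighted measure (6) each entry would additionally be multiplied by |t|.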
3. Application to Structural Model

The structure used as the basis for the comparison of the different error functions is shown in Figure 2. It consists of a floating offshore platform located at 105 m water depth, which is anchored by 18 mooring lines assembled in four main clusters. The external forces acting on the structure are induced by waves, current, and wind.
3.1. Structural Model. In principle, the dynamic analysis of the platform-mooring system corresponds to solving the equation of motion

$$\mathbf{M}(\mathbf{r})\ddot{\mathbf{r}} + \mathbf{C}(\mathbf{r})\dot{\mathbf{r}} + \mathbf{K}(\mathbf{r})\mathbf{r} = \mathbf{f}(t) \quad (7)$$

In this nonlinear equation r contains the degrees of freedom of the structural model, and f includes all external forces acting on the structure from, for example, gravity, buoyancy, and hydrodynamic effects, while the nonconstant matrices M, C, and K represent the system inertia, damping, and stiffness, respectively. The system inertia matrix M takes into account both the structural inertia and the response-dependent hydrodynamic added mass. Linear and nonlinear energy dissipation from both internal structural damping and hydrodynamic damping are accounted for by the damping matrix C. Finally, the stiffness matrix contains contributions from both the elastic stiffness and the response-dependent geometric stiffness.
The nonlinear equations of motion in (7) couple the structural response of the floating platform and the response of the mooring lines. However, the system is effectively solved by separating the solution procedure into the following steps. First, the motion of the floating platform is computed by the program SIMO [13], assuming a quasistatic catenary mooring line model with geometric nonlinearities. The platform response from this initial analysis is subsequently used as excitation, in terms of prescribed platform motion, in a detailed nonlinear finite element analysis of the specific mooring line with the highest tension stresses. The location of the mooring line with the largest stresses is indicated in Figure 3. For this specific line, the hot-spot with respect to fatigue is located close to the platform, and the force there is in the following referred to as the top tension force. From the detailed, fully nonlinear analysis performed by RIFLEX the time history of the top tension force at this hot-spot is extracted.

Based on the simulated time histories for both the platform motion and the top tension force, an ANN is trained to predict the top tension force in the selected mooring line with the platform response variables as network input. This is considered next in Section 3.2. In [9] a multilayer ANN was trained to simulate the top tension force two orders of magnitude faster than a corresponding numerical model. The training data was set up and arranged so that a single ANN with a single hidden layer could simulate all fatigue-relevant sea states and thereby provide a significant reduction in the computational effort associated with a fatigue life evaluation. For clarity, the ANN used in this example covers only a few sea states with different significant wave heights and constant peak period. This gives a compact neural network that is conveniently used to illustrate the influence of changing the error function in the training of the network.
3.2. Selection of Training Data. The ultimate purpose of the ANN is to completely bypass the computationally expensive numerical time integration procedure, which in this case is conducted by the RIFLEX model. This means that the input to the neural network must be identical to the input used for the RIFLEX calculations. In this case the input is therefore the platform motion, represented by the six degrees of freedom denoted in Figure 1 and illustrated in Figure 2. In principle, the number of neural network output variables can be chosen freely, and in fact all degrees of freedom from the numerical finite element analysis can be included as output variables in the corresponding ANN. However, the strength of the ANN in this context is that it may provide only the output variable that drives the design of the structure, which in this case is the maximum top tension force in the particular mooring line. This leads to a very fast simulation procedure which, for a well-trained network, provides sufficiently accurate results. Thus, the ANN is in the present case designed and trained to predict the top tension of the mooring line, and the platform motion (six motion components: surge, sway, heave, roll, pitch, and yaw) is, together with the top tension of previous time steps, used as input to the ANN; see Figure 1. This means that the input vector x_n at time increment n can be constructed as

$$\mathbf{x}_n = \big[\,[x_t\ x_{t-h}\cdots x_{t-dh}],\ [y_t\ y_{t-h}\cdots y_{t-dh}],\ [z_t\ z_{t-h}\cdots z_{t-dh}],\ [\alpha_t\ \alpha_{t-h}\cdots \alpha_{t-dh}],$$
$$\qquad [\beta_t\ \beta_{t-h}\cdots \beta_{t-dh}],\ [\gamma_t\ \gamma_{t-h}\cdots \gamma_{t-dh}],\ [T_{t-h}\ T_{t-2h}\cdots T_{t-dh}]\,\big]^{\top} \quad (8)$$
Figure 2: Mooring system with floating platform and anchor lines.

Figure 3: Mooring line system (top view); the traveling direction of waves, current, and wind and the mooring line with the highest stress are indicated.
where t = nh denotes the current time, h is the time increment, and d is the number of previous time steps included in the input, that is, the model memory. The corresponding ANN output is the value of the top tension force T_t in the mooring line:

$$y_n = T_t \quad (9)$$

Since there is only one network output, y is a scalar and not a vector as in (1). For the training of the ANN, nonlinear simulations in RIFLEX are conducted for sea states with a significant wave height of H_S = 2 m, 8 m, and 14 m, respectively. While neural networks are very good at interpolating within the training range, they are typically only able to perform limited extrapolation outside the training range. Thus, the selected training set must contain both the minimum wave height (2 m), the maximum wave height (14 m), and, in this case, a moderate wave height (8 m), to provide sufficient training data over the full range of interest. With these wave heights included in the training data, the ANN is expected to be able to provide accurate time histories for the top tension force for all intermediate wave heights. The seven 3-hour simulation records generated by RIFLEX
are divided into a training set and a validation set. The data used for training of the ANN is shown in Figures 4 and 5. Figure 4 shows the time histories for the six motion degrees of freedom of the platform, calculated by the initial analysis in SIMO and used as input to both the full numerical analysis in RIFLEX and the ANN training and simulation. Figure 5 shows the corresponding time histories for the top tension force determined by RIFLEX. The full time histories shown in Figures 4 and 5 are constructed of time series for the three significant wave heights: the first 830 seconds of the training set represent 2 m significant wave height, the next 830 seconds are for 8 m wave height, and the remaining part is for 14 m wave height.
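The assembly of the input vector in (8) can be sketched as follows; the names and data layout are illustrative, not taken from the authors' MATLAB toolbox:

```python
def build_input(motions, tension, n, d=4):
    """Assemble the ANN input vector (8) at time step n (step size h).
    motions: dict mapping each platform DOF name to its sampled time history.
    tension: past top tension values (RIFLEX values during training; the ANN's
    own predictions during simulation). Each DOF contributes the samples at
    t, t-h, ..., t-dh; the tension contributes t-h, ..., t-dh only."""
    x = []
    for dof in ("surge", "sway", "heave", "roll", "pitch", "yaw"):
        x.extend(motions[dof][n - k] for k in range(d + 1))
    x.extend(tension[n - k] for k in range(1, d + 1))
    return x
```

With the memory d = 4 chosen in Section 3.3 this yields 6(d + 1) + d = 34 input values, to which the fixed bias entry of (1) is prepended.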
The SIMO simulations are conducted with a time step size of 0.5 s. In the subsequent RIFLEX simulations the time step must be sufficiently small to keep the associated Newton-Raphson iteration algorithm stable. In these simulations a time step size of 0.1 s is therefore chosen, which means that the additional input parameters are obtained by linear interpolation between the simulation values from SIMO. For the ANN, the time step size h must be chosen so that the network is able to grasp the dynamic behavior of the structure. Even so, in many cases the ANN is capable of
Figure 4: Platform motion used as ANN training input data.

Figure 5: Tension force history used as ANN training target data.
Figure 6: Validation error as a function of the number of neurons in the hidden layer.
handling fairly large time step increments compared to the corresponding numerical models. When using a larger time step in the ANN simulations, for example by omitting a number of in-between data points, it is possible to reduce the size of the training data set and thereby reduce the computational time used for ANN training and, eventually, also for the ANN simulation. In the example of this paper a time step size of h = 2.0 s is found to yield a good balance between accuracy and computational efficiency, and this time step h is therefore used for the ANN simulations.
3.3. Design of ANN Architecture. In the design of the ANN architecture three variables are investigated: (1) the number of neurons in the hidden layer; (2) the size of the model memory d; and (3) the required amount of training data. Once the ANN has been trained and is ready for use, the network size has no significant influence on the total simulation time and computational effort. The main time consumer is the training part, and the time used for training of the network highly depends on the network size and the size of the training data set. Hence, it is of great interest to design an ANN architecture that is as compact and effective as possible.
Figure 6 shows a plot of the test error measure E_test relative to the number of neurons in the hidden layer of the ANN. In this section, where the three basic ANN variables are chosen, the error measure is the mean square error (MSE), corresponding to the E_2 error measure in Section 2.3. The curve in Figure 6 furthermore represents the mean value of the error based on five simulations, while the vertical bars indicate the standard deviation. It is seen from this curve that the error and the scatter in performance of the trained ANN are lowest when the hidden layer contains four neurons. Therefore, an effective and fairly accurate ANN performance is expected when four neurons in the hidden layer are used in the following simulations.

Figure 7 shows the test error relative to the model memory d, which represents the number of previous time steps used as network input. First of all, it is found that including memory in the model significantly reduces the error. However, it is also seen from the figure that an increase of the memory beyond four previous steps implies no significant improvement in the ANN performance. Thus, a four-step memory, that is, d = 4, is used in the following numerical analyses.

For the training of any ANN it is always crucial to have a sufficient amount of training data in order to cover the full range of the network and secure applicability with sufficient statistical significance. Figure 8 shows the test error as a function of the length of the training data set. As for the parameter studies in Figures 6 and 7, the present curve shows the mean results based on five simulation records. To make sure that a sufficient amount of data is used for the training of the ANN, a total simulated record of 2500 s is included, which corresponds to approximately 14 minutes for each of the three sea states. It is seen in Figure 8 that this length of the simulation record is more than sufficient to secure a low error.
The trained ANN is able to generate nonlinear output without equilibrium iterations and hence at an often significantly higher computational pace compared to classic integration procedures with embedded iteration schemes. Figure 9 shows the simulation of the top tension force in the mooring line calculated by the finite element method in RIFLEX and by the trained ANN. The four subfigures in Figure 9 represent the four wave heights that were not part of the ANN training, that is, $H_s = 4$ m, 6 m, 10 m, and 12 m. For these particular simulation records the trained ANN simulates about 600 times faster than the FEM calculations by RIFLEX.
3.4. Comparison of Error Measures. In the design of the ANN architecture presented above, the results are obtained for an ANN trained with the MSE as objective function, or error measure. It is in the following conveniently assumed that this ANN architecture is valid regardless of the specific choice of error function. Thus, the various error measures presented in Section 2.3 are in this section compared for the ANN with four neurons in the hidden layer, four memory input variables, and a training length of 2500 s.
As mentioned earlier, some of the pregenerated data are saved for performance validation of the trained ANN. These data are used to calculate a validation error $E_{test}$, which is the measure for the accuracy of the trained network. Figure 10 shows the development in the validation error during the network training with all four different error functions presented in Section 2.3. It is clearly seen that all four error measures are minimized during training, whereas it is difficult to compare the detailed performance and efficiency of the four different networks based on these curves.
Figure 11(a) summarizes the development of the validation error for the four ANNs, but this time the validation error is calculated using the same MSE error measure ($E_2$) to give a consistent basis for comparison. Thus, the four networks have been trained with the four error measures, respectively, while in Figure 11(a) they are compared by the MSE. Even though the networks here are compared on common ground, it is still difficult to evaluate how well they will perform individually on a full fatigue analysis. Figure 11(b) illustrates the accuracy of the four networks by showing a close-up of a local maximum in the top tension force time history. It is seen that the two unweighted error functions, $E_2$ and $E_4$, perform superiorly compared to the weighted functions. Also, the unweighted error measures provide a smaller MSE error in Figure 11(a). This indicates that weighting of the error functions implies no improvement of the performance and accuracy of the ANN.
3.5. Rain Flow Count. The magnitude of the various test error measures is difficult to relate directly to the performance of the ANN compared to the performance of the RIFLEX model. Since these long time-domain simulations are often used in fatigue life calculations, an appropriate way to evaluate the accuracy of the ANN is to compare the accumulated rain
8 Journal of Applied Mathematics
Figure 7: Validation error $E_{test}$ as a function of model memory (number of previous time steps included in the input).

Figure 8: Validation error $E_{test}$ as a function of the amount of training data (length of simulated training data, in seconds).
flow counts of the tension force cycles for each significant wave height. In these fatigue analyses the RIFLEX results are considered as the exact solution. For these calculations the full 3-hour simulations are used, and Figure 12 shows the results of the rain flow count of accumulated tension force cycles for each of the significant wave heights. Deviations between the RIFLEX and ANN simulations are listed in Table 1. It should be noted that the deviations for the seven individual sea states do not add up to give the total deviation, because the individual sea states do not contribute equally to the overall damage. It is seen that the various networks perform very well on all individual sea states and that the best networks thereby obtain a deviation of less than 2% for the accumulated tension force cycles when summing up the contributions from all sea states. This deviation is a robust estimate and is likely to also represent the accuracy of a full subsequent fatigue life evaluation.
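A rain flow count reduces a tension (or stress) history to a set of closed load cycles. As a minimal illustration only, the following is a simplified three-point counting sketch; production fatigue codes follow ASTM E1049 with proper treatment of the residue (half cycles), which is ignored here:

```python
def rainflow_ranges(series):
    """Simplified three-point rain flow count.

    Returns the ranges of the full cycles extracted from `series`;
    whatever remains unclosed on the stack at the end (the residue,
    counted as half cycles in ASTM E1049) is ignored in this sketch.
    """
    stack, ranges = [], []
    for value in series:
        stack.append(value)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # current range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break
            ranges.append(y)                 # close the inner cycle of range y
            del stack[-3:-1]                 # drop its two turning points
    return ranges

cycles = rainflow_ranges([0, 5, 1, 6, 0])    # -> [4, 6]
```

Applying such a count to the ANN and RIFLEX tension histories and accumulating the resulting cycle ranges per sea state gives exactly the kind of comparison reported in Table 1.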
It is seen from the rain flow counting results in Figure 12 and Table 1 that the neural networks trained with the unweighted
Table 1: Deviations (in %) on accumulated tension force cycles.

H_S (m)   E_2     E_2^w    E_4     E_4^w
2         -3.6    -16.3     7.3    65.8
4         -6.9    -13.0     4.8    32.0
6         -4.4     -8.0     2.6    14.9
8         -2.7     -4.8    -1.5     8.6
10        -2.0     -3.5    -1.6     6.9
12        -1.4     -3.1    -1.7     5.3
14        -0.8     -0.3     2.1     3.1
Total     -1.7     -3.8     1.6     6.4
error function in general perform slightly better than those trained with the weighted error functions. Thus, placing a weight on the error function does not seem to have the desired effect in this application concerning the analysis of a mooring line system for a floating platform.
Figure 9: Comparison of simulated top tension forces (in kN) using the ANN and RIFLEX for the four wave heights that were not part of the network training ($H_S$ = 4 m, 6 m, 10 m, and 12 m).
Figure 10: Validation error $E_{test}$ during training. (a) Error functions with $R = 2$: $E_2$ and $E_2^w$. (b) Error functions with $R = 4$: $E_4$ and $E_4^w$.
Figure 11: Comparison of performance for the four different error functions: (a) validation error $E_{test}$ during training; (b) accuracy of simulation at a peak of the top tension force (in kN) compared to RIFLEX.
Figure 12: Accumulated stress for each sea state $H_S$, calculated using rain flow counting (accumulated tension cycles, $\times 10^8$), for RIFLEX and the four error functions.
4. Conclusion
It has been shown how a relatively small and compact artificial neural network can be trained to perform high-speed dynamic simulation of tension forces in a mooring line on a floating platform. Furthermore, it has been shown that a proper selection of training data enables the ANN to cover a wide range of different sea states, even sea states that are not included directly in the training data. In the example presented in this paper it is clear that weighting the error function used to train an ANN in order to emphasize the peak response does not improve the network performance with respect to the accuracy of fatigue calculations. In fact, the ANN appears to perform worse when trained with the weighted error function. On the other hand, it appears that increasing the power of the error function from two to four provides a slight improvement in the performance of the trained ANN.
The idea of a weighted error function thus seems to reduce the ANN performance: apparently, focusing on the high amplitudes deteriorates the low amplitude response more than it improves the response at large amplitudes.
As a conclusion, the Minkowski error with $R = 4$ is interesting for the mooring line example in more than one aspect. It places more focus on the large amplitudes and improves the ANN slightly. Furthermore, the second derivative of $E_4$ is fairly easy to determine, which makes this objective function suitable for several network optimizing schemes, such as Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS), that are based on the second derivative of the error function. Network optimization is, however, not considered further in this paper but will be the subject of future work.
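The claim that the second derivative of $E_4$ is easy to determine can be verified directly for a single output term: since $|y-t|^4 = (y-t)^4$, the absolute value drops out and the derivatives are plain polynomials.

```latex
E_2 = \tfrac{1}{2}(y-t)^2, \quad
\frac{\partial E_2}{\partial y} = y-t, \quad
\frac{\partial^2 E_2}{\partial y^2} = 1,
\\[4pt]
E_4 = \tfrac{1}{4}(y-t)^4, \quad
\frac{\partial E_4}{\partial y} = (y-t)^3, \quad
\frac{\partial^2 E_4}{\partial y^2} = 3\,(y-t)^2.
```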
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
References

[1] DNV, "Offshore Standard DNV-OS-F201: Dynamic Risers," Det Norske Veritas, October 2010.
[2] DNV, "Recommended Practice DNV-RP-F204: Riser Fatigue," Det Norske Veritas, July 2005.
[3] M. H. Patel and F. B. Seyed, "Review of flexible riser modelling and analysis techniques," Engineering Structures, vol. 17, no. 4, pp. 293-304, 1995.
[4] H. Adeli, "Neural networks in civil engineering: 1989-2000," Computer-Aided Civil and Infrastructure Engineering, vol. 16, no. 2, pp. 126-142, 2001.
[5] B. Warner and M. Misra, "Understanding neural networks as statistical tools," The American Statistician, vol. 50, no. 4, pp. 284-293, 1996.
[6] K. Ordaz-Hernandez, X. Fischer, and F. Bennis, "Model reduction technique for mechanical behaviour modelling: efficiency criteria and validity domain assessment," Journal of Mechanical Engineering Science, vol. 222, no. 3, pp. 493-505, 2008.
[7] M. Hosseini and H. Abbas, "Neural network approach for prediction of deflection of clamped beams struck by a mass," Thin-Walled Structures, vol. 60, pp. 222-228, 2012.
[8] R. Guarize, N. A. F. Matos, L. V. S. Sagrilo, and E. C. P. Lima, "Neural networks in the dynamic response analysis of slender marine structures," Applied Ocean Research, vol. 29, no. 4, pp. 191-198, 2007.
[9] N. H. Christiansen, P. E. T. Voie, J. Høgsberg, and N. Sødahl, "Efficient mooring line fatigue analysis using a hybrid method time domain simulation scheme," in Proceedings of the 32nd ASME International Conference on Ocean, Offshore and Arctic Engineering (OMAE '13), vol. 1, 2013.
[10] S. A. Solla, E. Levin, and M. Fleisher, "Accelerated learning in layered neural networks," Complex Systems, vol. 2, no. 6, pp. 625-639, 1988.
[11] J. B. Hampshire and A. H. Waibel, "A novel objective function for improved phoneme recognition using time-delay neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 216-228, 1990.
[12] H. Altincay and M. Demirekler, "Comparison of different objective functions for optimal linear combination of classifiers for speaker identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '01), vol. 1, pp. 401-404, May 2001.
[13] MARINTEK, SIMO Theory Manual, Sintef, Trondheim, Norway, 2009.
[14] MARINTEK, RIFLEX Theory Manual, Sintef, Trondheim, Norway, 2008.
[15] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[16] C. M. Bishop, Neural Networks for Pattern Recognition, The Clarendon Press, Oxford, UK, 1995.
[17] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
Figure 1: Sketch of the artificial neural network for predicting the top tension force in a mooring line. The input layer receives the six platform motion components (surge $x$, sway $y$, heave $z$, roll $\alpha$, pitch $\beta$, yaw $\gamma$) together with past tension values $T_{past}$; a hidden layer with bias terms connects the input to the tension output $T$ through the weight matrices $W_I$ and $W_O$.
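The network in Figure 1 is a standard one-hidden-layer feedforward net. As a minimal sketch (the layer sizes are illustrative, and the tanh hidden activation is an assumption consistent with the gradient expressions later in the paper, which contain the tanh derivative factor $1 - z_j^2$), the forward pass can be written as:

```python
import numpy as np

def ann_forward(x, W_i, W_o, b_h, b_o):
    """One-hidden-layer network: tanh hidden units, linear output.

    x   : input vector (platform motions plus past tensions)
    W_i : input-to-hidden weights, shape (n_hidden, n_input)
    W_o : hidden-to-output weights, shape (n_output, n_hidden)
    """
    z = np.tanh(W_i @ x + b_h)   # hidden activations
    y = W_o @ z + b_o            # linear output (top tension estimate)
    return y, z

# Illustrative dimensions: 34 inputs, 4 hidden neurons, scalar tension output
rng = np.random.default_rng(0)
n_in, n_hid = 34, 4
W_i, b_h = rng.normal(size=(n_hid, n_in)), np.zeros(n_hid)
W_o, b_o = rng.normal(size=(1, n_hid)), np.zeros(1)
y, z = ann_forward(rng.normal(size=n_in), W_i, W_o, b_h, b_o)
```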
$E$. The literature suggests many choices of error functions [16].
The simplest and most commonly used error function in neural networks used for regression is the mean square error (MSE). However, the purpose of the present ANN is to significantly reduce the calculation time for a fatigue analysis of the marine type structure. And since the large amplitude stresses contribute by far the most to the accumulated damage of the mooring lines, it is of interest to investigate how a different choice of error measure affects the accuracy and efficiency of the fatigue life calculations. Four different error functions are therefore tested and compared to the full numerical solution obtained by time simulations using the RIFLEX code.
The comparison is based on the so-called Minkowski-$R$ error

$$E = \frac{1}{R} \sum_{n} \sum_{k=1}^{c} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R} \tag{3}$$
where $y$ is the scalar ANN output and $t$ is the target value. The classic MSE is seen to be a special case of the Minkowski error with $R = 2$. In many situations the performance and accuracy of the ANN are equally important regardless of the magnitude of the actual output. However, when dealing with the analysis of structures this is not always the case. For example, the purpose of the ANN in the present paper is to simulate the top tension force time history in a mooring line, which is subsequently used to evaluate the fatigue life of the line. And since by far the most damage in the mooring line is introduced by large amplitude stress cycles, ANN inaccuracy on large stresses is much more expensive than errors on small and basically unimportant amplitudes. One way to specifically emphasize large amplitudes is to increase the $R$-value in (3). Another and more direct way to place additional focus on the importance of large stress amplitudes is to multiply each term in (3) by the absolute value of the target values. This yields the following weighted error function
$$E^w = \frac{1}{R} \sum_{n} \sum_{k=1}^{c} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R} \cdot \left| t_{kn} \right|^{R} \tag{4}$$
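For reference, (3) and (4) can be written as a compact NumPy sketch (function name and the toy data are illustrative; the paper itself uses $R \in \{2, 4\}$):

```python
import numpy as np

def minkowski_error(y, t, R=2, weighted=False):
    """Minkowski-R error of (3); weighted=True gives the E^w of (4),
    where each term is scaled by |t|^R to emphasize large targets."""
    e = np.abs(y - t) ** R
    if weighted:
        e = e * np.abs(t) ** R
    return e.sum() / R

y = np.array([1.0, 2.5, -0.5])   # network outputs (illustrative)
t = np.array([1.2, 2.0, -1.0])   # targets

E2  = minkowski_error(y, t, R=2)                   # classic MSE-type error
E4  = minkowski_error(y, t, R=4)                   # stronger peak emphasis
E2w = minkowski_error(y, t, R=2, weighted=True)    # weighted variant
```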
The performance of a trained ANN is usually measured in terms of the so-called validation error, which is calculated in the same way as the training error but on an entirely new data set that has not been part of the network training. This means that when comparing networks that have been trained using different error functions, the validation error is the common measure to assess performance in terms of accuracy and computational effort. Obviously, the various networks considered in the following could be tested and compared against any of the error functions that the networks have been trained by. But that would definitely favor the particular ANN that has been trained by the specific error function chosen as the validation measure. And since the ultimate objective of the ANN is to predict the fatigue life of the mooring line, it is appropriate to calculate and compare the accumulated damage in the mooring line caused by all seven sea state realizations previously mentioned in Section 1.
2.3. Network Training. In (2) the steepest descent correction of the weight vector for the training of the network requires the first derivative of the error function $E$ with respect to the weight arrays $\mathbf{W}$. Differentiation of (3) with respect to the components of the two weight matrices $\mathbf{W}_I$ and $\mathbf{W}_O$ yields

$$\frac{dE}{dW_{O,kj}} = \sum_{n} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R-1} z_j,$$
$$\frac{dE}{dW_{I,ji}} = \sum_{n} \left( (1 - z_j^2) \sum_{k=1}^{c} w_{kj} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R-1} \right) x_i \tag{5}$$
These gradients are now inserted into (2) and thereby govern the correction of the weights for each iteration step in the training procedure. Similar differentiation of the weighted error function (4) yields
$$\frac{dE^w}{dW_{O,kj}} = \sum_{n} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R-1} z_j \cdot \left| t_{kn} \right|^{R},$$
$$\frac{dE^w}{dW_{I,ji}} = \sum_{n} \left( (1 - z_j^2) \sum_{k=1}^{c} w_{kj} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R-1} \left| t_{kn} \right|^{R} \right) x_i \tag{6}$$
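A sketch of these gradients in NumPy, for the single-output tanh-hidden-layer case used here (names and dimensions are illustrative; note that for the factor $|y - t|^{R-1}$ in (5)-(6) to match the true derivative, the sign of $(y - t)$, implicit in the paper's notation, must be carried along, which the code makes explicit):

```python
import numpy as np

def grads(x, t, W_i, W_o, R=2, weighted=False):
    """Gradients of the (optionally weighted) Minkowski-R error for a
    one-hidden-layer tanh network with linear scalar output, cf. (5)-(6)."""
    z = np.tanh(W_i @ x)                            # hidden activations
    y = W_o @ z                                     # scalar output, shape (1,)
    e = y - t
    scale = np.abs(t) ** R if weighted else 1.0     # |t|^R factor of (6)
    g = np.sign(e) * np.abs(e) ** (R - 1) * scale   # output error signal
    dW_o = np.outer(g, z)                           # output-layer gradient
    dW_i = np.outer((1.0 - z ** 2) * (W_o.T @ g).ravel(), x)  # input layer
    return dW_o, dW_i

# Illustrative dimensions: 3 inputs, 2 hidden neurons, scalar output
rng = np.random.default_rng(1)
x = rng.normal(size=3)
t = np.array([0.7])
W_i = rng.normal(size=(2, 3))
W_o = rng.normal(size=(1, 2))
dW_o, dW_i = grads(x, t, W_i, W_o, R=4, weighted=True)
```

A finite-difference check against the scalar loss confirms the expressions.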
The above equations are implemented into the training algorithm and tested for the two power values $R = 2$ and $R = 4$. This gives a total of four different error functions, which in the following will be denoted as

(i) $E_2$: unweighted error function with $R = 2$,
(ii) $E_2^w$: weighted error function with $R = 2$,
(iii) $E_4$: unweighted error function with $R = 4$,
(iv) $E_4^w$: weighted error function with $R = 4$.

It should be noted that the first case, $E_2$, represents the classic MSE function.
3. Application to Structural Model

The structure used as the basis for the comparison of the different error functions is shown in Figure 2. It consists of a floating offshore platform located at 105 m water depth, which is anchored by 18 mooring lines assembled in four main clusters. The external forces acting on the structure are induced by waves, current, and wind.
3.1. Structural Model. In principle, the dynamic analysis of the platform-mooring system corresponds to solving the equation of motion

$$\mathbf{M}(\mathbf{r})\,\ddot{\mathbf{r}} + \mathbf{C}(\mathbf{r})\,\dot{\mathbf{r}} + \mathbf{K}(\mathbf{r})\,\mathbf{r} = \mathbf{f}(t) \tag{7}$$

In this nonlinear equation, $\mathbf{r}$ contains the degrees of freedom of the structural model and $\mathbf{f}$ includes all external forces acting on the structure from, for example, gravity, buoyancy, and hydrodynamic effects, while the nonconstant matrices $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{K}$ represent the system inertia, damping, and stiffness, respectively. The system inertia matrix $\mathbf{M}$ takes into account both the structural inertia and the response dependent hydrodynamic added mass. Linear and nonlinear energy dissipation from both internal structural damping and hydrodynamic damping are accounted for by the damping matrix $\mathbf{C}$. Finally, the stiffness matrix contains contributions from both the elastic stiffness and the response dependent geometric stiffness.
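Equations of motion of this form are typically advanced in time with an implicit scheme. As a rough illustration only (RIFLEX's actual nonlinear finite element solver with Newton-Raphson equilibrium iterations is far more involved), a Newmark average-acceleration stepping of a linear single-DOF version of (7) looks like this:

```python
import math

def newmark_sdof(M, C, K, f, r0, v0, h, beta=0.25, gamma=0.5):
    """Newmark time stepping of M*r'' + C*r' + K*r = f(t) for a scalar DOF.
    f is a list of force values at t = 0, h, 2h, ...; returns displacements."""
    r, v = r0, v0
    a = (f[0] - C * v - K * r) / M                  # initial acceleration
    out = [r]
    for fn in f[1:]:
        # predictors containing only known quantities
        rp = r + h * v + h * h * (0.5 - beta) * a
        vp = v + h * (1.0 - gamma) * a
        # solve the equation of motion for the new acceleration
        a = (fn - C * vp - K * rp) / (M + gamma * h * C + beta * h * h * K)
        r = rp + beta * h * h * a
        v = vp + gamma * h * a
        out.append(r)
    return out

# Free vibration of an undamped oscillator with natural period T = 1 s
w = 2.0 * math.pi
r = newmark_sdof(M=1.0, C=0.0, K=w * w, f=[0.0] * 101, r0=1.0, v0=0.0, h=0.01)
```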
The nonlinear equations of motion in (7) couple the structural response of the floating platform and the response of the mooring lines. However, the system is effectively solved by separating the solution procedure into the following steps. First, the motion of the floating platform is computed by the program SIMO [13], assuming a quasistatic catenary mooring line model with geometric nonlinearities. The platform response from this initial analysis is subsequently used as excitation, in terms of prescribed platform motion, in a detailed nonlinear finite element analysis of the specific mooring line with the highest tension stresses. The location of the mooring line with the largest stresses is indicated in Figure 3. For this specific line, the hot-spot with respect to fatigue is located close to the platform, and the tension at this location is in the following referred to as the top tension force. From the detailed fully nonlinear analysis performed by RIFLEX, the time history of the top tension force at this hot-spot is extracted.

Based on the simulated time histories for both the platform motion and the top tension force, an ANN is trained to predict the top tension force in the selected mooring line with the platform response variables as network input. This is considered next in Section 3.2. In [9] a multilayer ANN was trained to simulate the top tension force two orders of magnitude faster than a corresponding numerical model. The training data was set up and arranged so that a single ANN with a single hidden layer could simulate all fatigue relevant sea states and thereby provide a significant reduction in the computational effort associated with a fatigue life evaluation. For clarity, the ANN used in this example covers only a few sea states with different significant wave heights and constant peak period. This gives a compact neural network that is conveniently used to illustrate the influence of changing the error function in the training of the network.
3.2. Selection of Training Data. The ultimate purpose of the ANN is to completely bypass the computationally expensive numerical time integration procedure, which in this case is conducted by the RIFLEX model. This means that the input to the neural network must be identical to the input used for the RIFLEX calculations. In this case the input is therefore the platform motion, represented by the six degrees of freedom denoted in Figure 1 and illustrated in Figure 2. In principle, the number of neural network output variables can be chosen freely, and in fact all degrees of freedom from the numerical finite element analysis can be included as output variables in the corresponding ANN. However, the strength of the ANN in this context is that it may provide only the output variable that drives the design of the structure, which in this case is the maximum top tension force in the particular mooring line. This leads to a very fast simulation procedure, which for a well-trained network provides sufficiently accurate results. Thus, the ANN is in the present case designed and trained to predict the top tension of the mooring line, and the platform motion (six motion components: surge, sway, heave, roll, pitch, and yaw) is, together with the top tension of previous time steps, used as input to the ANN; see Figure 1. This means that the input vector $\mathbf{x}_n$ at time increment $n$ can be constructed as
$$\mathbf{x}_n = \big[\, [x_t\; x_{t-h} \cdots x_{t-dh}]\;\; [y_t\; y_{t-h} \cdots y_{t-dh}]\;\; [z_t\; z_{t-h} \cdots z_{t-dh}]\;\; [\alpha_t\; \alpha_{t-h} \cdots \alpha_{t-dh}]\;\; [\beta_t\; \beta_{t-h} \cdots \beta_{t-dh}]\;\; [\gamma_t\; \gamma_{t-h} \cdots \gamma_{t-dh}]\;\; [T_{t-h}\; T_{t-2h} \cdots T_{t-dh}] \,\big]^{T} \tag{8}$$
Figure 2: Mooring system with floating platform and anchor lines (translations $x$, $y$, $z$ and rotations $\alpha$, $\beta$, $\gamma$).

Figure 3: Mooring line system (top view), indicating the traveling direction of waves, current, and wind and the mooring line with the highest stress.
where $t = nh$ denotes the current time, $h$ is the time increment, and $d$ is the number of previous time steps included in the input, that is, the model memory. The corresponding ANN output is the value of the top tension force $T_t$ in the mooring line:

$$y_n = T_t \tag{9}$$
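The windowing in (8)-(9) can be sketched as follows (array names and the toy histories are illustrative; the six motion series and the tension series are assumed to be sampled at the ANN time step $h$):

```python
import numpy as np

def build_input(motions, tension, n, d):
    """Input vector x_n of (8): the current and d previous samples of each
    of the six motion components, plus the d previous tension values."""
    x = []
    for comp in motions:                       # surge, sway, heave, roll, pitch, yaw
        x.extend(comp[n - k] for k in range(d + 1))
    x.extend(tension[n - k] for k in range(1, d + 1))
    return np.array(x)

# Toy histories: six motion series and one tension series, 20 samples each
steps = np.arange(20.0)
motions = [steps + 100.0 * i for i in range(6)]
tension = 1000.0 + steps
x_n = build_input(motions, tension, n=10, d=4)   # 6*(d+1) + d = 34 entries
```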
Since there is only one network output, $y$ is a scalar and not a vector as in (1). For the training of the ANN, nonlinear simulations in RIFLEX are conducted for sea states with significant wave heights of $H_s$ = 2 m, 8 m, and 14 m, respectively. While neural networks are very good at interpolating within the training range, they are typically only able to perform limited extrapolation outside the training range. Thus, the selected training set must contain both the minimum wave height (2 m), the maximum wave height (14 m), and, in this case, a moderate wave height (8 m) to provide sufficient training data over the full range of interest. With these wave heights included in the training data, the ANN is expected to be able to provide accurate time histories of the top tension force for all intermediate wave heights. The seven 3-hour simulation records generated by RIFLEX are divided into a training set and a validation set. The data used for training of the ANN are shown in Figures 4 and 5. Figure 4 shows the time histories for the six motion degrees of freedom of the platform, calculated by the initial analysis in SIMO and used as input to both the full numerical analysis in RIFLEX and the ANN training and simulation. Figure 5 shows the corresponding time history of the top tension force determined by RIFLEX. The full time histories shown in Figures 4 and 5 are constructed from time series for the three significant wave heights: the first 830 seconds of the training set represent 2 m significant wave height, the next 830 seconds are for 8 m wave height, and the remaining part is for 14 m wave height.
The SIMO simulations are conducted with a time step size of 0.5 s. In the subsequent RIFLEX simulations the time step must be sufficiently small to keep the associated Newton-Raphson iteration algorithm stable. In these simulations a time step size of 0.1 s is therefore chosen, which means that the additional input parameters are obtained by linear interpolation between the simulation values from SIMO. For the ANN, the time step size $h$ must only be chosen so that the network is able to grasp the dynamic behavior of the structure. Therefore, in many cases the ANN is capable of
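The resampling from the 0.5 s SIMO grid to the 0.1 s RIFLEX grid is plain linear interpolation; a sketch with an illustrative signal:

```python
import numpy as np

t_simo = np.arange(0.0, 10.0 + 1e-9, 0.5)     # SIMO output at 0.5 s steps
surge = np.sin(0.4 * t_simo)                  # one motion component (toy signal)

t_riflex = np.arange(0.0, 10.0 + 1e-9, 0.1)   # RIFLEX grid at 0.1 s steps
surge_fine = np.interp(t_riflex, t_simo, surge)
```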
Figure 4: Platform motion (surge, sway, heave, roll, pitch, and yaw) over the 2500 s record, used as ANN training input data.

Figure 5: Top tension force history (kN) over the 2500 s record, used as ANN training target data.

Figure 6: Validation error $E_{test}$ as a function of the number of neurons in the hidden layer.
handling fairly large time step increments compared to the corresponding numerical models. When using a larger time step in the ANN simulations, for example by omitting a number of in-between data points, it is possible to reduce the size of the training data set and thereby reduce the computational time used for ANN training and, eventually, also for the ANN simulation. In the example of this paper a time step size of $h$ = 2.0 s is found to yield a good balance between accuracy and computational efficiency, and this time step $h$ is therefore used for the ANN simulations.

3.3. Design of ANN Architecture. In the design of the ANN architecture, three variables are investigated: (1) the number of neurons in the hidden layer, (2) the size of the model memory $d$, and (3) the required amount of training data. When the ANN has been trained and is ready for use, the network size has no significant influence on the total simulation time and computational effort. The main time consumer is the training part, and the time used for training of the network highly depends on the network size and the size of the training data set. Hence, it is of great interest to design an ANN architecture that is as compact and effective as possible.
Figure 6 shows a plot of the test error measure $E_{test}$ relative to the number of neurons in the hidden layer of the ANN. In this section, where the three basic ANN variables are chosen, the error measure is the mean square error (MSE), corresponding to the $E_2$ error measure in Section 2.3.
curve in Figure 6 furthermore represents the mean value ofthe error based on five simulations while the vertical barsindicate the standard deviation It is seen from this curve thatthe performance and the scatter in performance of the trainedANN is lowest when the hidden layer contains four neuronsTherefore an effective and fairly accurate ANN performanceis expected when four neurons in the hidden layer are usedin the following simulations
Figure 7 shows the test error relative to the modelmemory 119889 which represents the number of previous timesteps used as network input First of all it is found thatincluding memory in the model significantly reduces theerror However it is also seen from the figure that anincrease of the memory beyond four previous steps impliesno significant improvement in the ANN performance Thusa four-step memory that is 119889 = 4 is used in the followingnumerical analyses
For the training of any ANN it is always crucial tohave a sufficient amount of training data in order to coverthe full range of the network and secure applicability withsufficient statistical significance Figure 8 shows the test erroras function of the length of the training data set As for theparameter studies in Figures 6 and 7 the present curve showsthe mean results based on five simulation records To makesure that a sufficient amount of data is used for the training ofthe ANN a total simulated record of 2500 s is included whichcorresponds to approximately a length of 14 minutes for eachof the three sea states It is seen in Figure 8 that this length ofthe simulation record is more than sufficient to secure a lowerror
The trained ANN is able to generate nonlinear outputwithout equilibrium iterations and hence at an often sig-nificantly higher computational pace compared to classicintegration procedures with embedded iteration schemesFigure 9 shows the simulation of the top tension force inthe mooring line calculated by the finite element methodin RIFLEX and by the trained ANN The four subfiguresin Figure 9 represent the four wave heights that were notpart of the ANN training that is 119867
119904= 4m 6m 10m
and 12m For these particular simulation records the trainedANN calculates a factor of about 600 times faster than theFEM calculations by RIFLEX
34 Comparison of Error Measures In the design of theANN architecture presented above the results are obtainedfor an ANN trained with the MSE as objective function orerror measure It is in the following conveniently assumedthat this ANN architecture is valid regardless of the specificchoice of error function Thus the various error measurespresented in Section 23 are in this section compared for theANN with four neurons in the hidden layer four memoryinput variables and a training length of 2500 s
As mentioned earlier some of the pregenerated data aresaved for performance validation of the trained ANN Thesedata are used to calculate a validation error 119864test whichis the measure for the accuracy of the trained networkFigure 10 shows the development in the validation errorduring the network training with all four different errorfunctions present in Section 23 It is clearly seen that all fourerror measures are minimized during training whereas it isdifficult to compare the detailed performance and efficiencyof the four different networks based on these curves
Figure 11(a) summarizes the development of the valida-tion error for the four ANN but this time the validation erroris calculated using the same MSE error measure (119864
2) to give
a consistent basis for comparison Thus the four networkshave been trained with four error measures respectivelywhile in Figure 11(a) they are compared by the MSE Eventhough the networks here are compared on common groundit is still difficult to evaluate how well they will performindividually on a full fatigue analysis Figure 11(b) illustratesthe accuracy of the four networks by showing a close upof a local maximum in the top tension force time historyIt is seen that the two unweighted error functions 119864
2and
119864
4 perform superiorly compared to the weighted functions
Also the unweighted error measures provide a smaller MSEerror in Figure 11(a)This indicates that weighting of the errorfunctions implies no improvement of the performance andaccuracy of the ANN
3.5 Rain Flow Count. The magnitude of the various test error measures is difficult to relate directly to the performance of the ANN compared to the performance of the RIFLEX model. Since these long time-domain simulations are often used in fatigue life calculations, an appropriate way to evaluate the accuracy of the ANN is to compare the accumulated rain flow counts of the tension force cycles for each significant wave height. In these fatigue analyses the RIFLEX results are considered as the exact solution. For these calculations the full 3-hour simulations are used, and Figure 12 shows the results of the rain flow count of accumulated tension force cycles for each of the significant wave heights. Deviations between RIFLEX and ANN simulations are listed in Table 1. It should be noted that the deviations for the individual seven sea states do not add up to give the total deviation, because the individual sea states do not contribute equally to the overall damage. It is seen that the various networks perform very well on all individual sea states and that the best networks thereby obtain a deviation of less than 2% for the accumulated tension force cycles when summing up the contributions from all sea states. This deviation is a robust estimate and is likely to also represent the accuracy of a full subsequent fatigue life evaluation.

Journal of Applied Mathematics

Figure 7 Validation error as function of model memory

Figure 8 Validation error as function of amount of training data
Table 1 Deviations (%) on accumulated tension force cycles

H_S     E_2     E_w2    E_4     E_w4
2       -3.6    -16.3    7.3    65.8
4       -6.9    -13.0    4.8    32.0
6       -4.4     -8.0    2.6    14.9
8       -2.7     -4.8   -1.5     8.6
10      -2.0     -3.5   -1.6     6.9
12      -1.4     -3.1   -1.7     5.3
14      -0.8     -0.3    2.1     3.1
Total   -1.7     -3.8    1.6     6.4

It is seen from the rain flow counting results in Figure 12 and Table 1 that the neural networks trained with unweighted error functions in general perform slightly better than those trained with weighted error functions. Thus, placing a weight on the error function does not seem to have the desired effect in this application concerning the analysis of a mooring line system for a floating platform.
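The cycle comparison above can be reproduced in outline with a simplified three-point rainflow counter in the style of ASTM E1049. This is a generic sketch, not the counting code used in the paper; the short integer series in the usage example is the classic ASTM illustration, standing in for a simulated tension history.

```python
def turning_points(series):
    """Reduce a time series to its sequence of local extrema."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # same slope direction: extend the excursion
        else:
            tp.append(x)        # slope reversal: new turning point
    return tp

def rainflow(series):
    """Return a list of (range, count) pairs; count is 1.0 or 0.5."""
    stack, cycles = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            if len(stack) == 3:
                cycles.append((y_rng, 0.5))   # range contains the start point
                stack.pop(0)
            else:
                cycles.append((y_rng, 1.0))   # full cycle extracted
                last = stack.pop()
                stack.pop(); stack.pop()
                stack.append(last)
    for a, b in zip(stack, stack[1:]):        # residual half cycles
        cycles.append((abs(b - a), 0.5))
    return cycles
```

Applied to the RIFLEX and ANN tension histories separately, the resulting range/count pairs can be binned per sea state and accumulated to give the deviations of Table 1.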
Figure 9 Comparison of simulated top tension forces (kN) using ANN and RIFLEX for the four wave heights (H_S = 4, 6, 10, and 12 m) that were not part of the network training
Figure 10 Validation error during training: (a) error functions with R = 2 (E_2, E_w2); (b) error functions with R = 4 (E_4, E_w4)
Figure 11 Comparison of performance for the four different error functions: (a) validation error during training; (b) accuracy of simulation at peak
Figure 12 Accumulated stress for each sea state H_S, calculated using rain flow counting
4. Conclusion
It has been shown how a relatively small and compact artificial neural network can be trained to perform high-speed dynamic simulation of tension forces in a mooring line on a floating platform. Furthermore, it has been shown that a proper selection of training data enables the ANN to cover a wide range of different sea states, even for sea states that are not included directly in the training data. In the example presented in this paper it is clear that weighting the error function used to train an ANN in order to emphasize peak response does not improve the network performance with respect to accuracy of fatigue calculations. In fact, the ANN appears to perform worse when trained with the weighted error function. On the other hand, it appears that increasing the power of the error function from two to four provides a slight improvement to the performance of the trained ANN.
The idea of a weighted error function thus seems to reduce the ANN performance: apparently, focusing on the high amplitudes deteriorates the low-amplitude response more than it improves the response with large amplitudes.
As a conclusion, the Minkowski error with R = 4 is interesting for the mooring line example in more than one aspect. It provides more focus on the large amplitudes and improves the ANN slightly. Furthermore, the second derivative of E_4 is fairly easy to determine, which makes this objective function suitable for several network optimization schemes, such as Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS), that are based on the second derivative of the error function. Network optimization is, however, not considered further in this paper but will be the subject of future work.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] DNV, "Offshore Standard DNV-OS-F201: Dynamic Risers," Det Norske Veritas, October 2010.
[2] DNV, "Recommended Practice DNV-RP-F204: Riser Fatigue," Det Norske Veritas, July 2005.
[3] M. H. Patel and F. B. Seyed, "Review of flexible riser modelling and analysis techniques," Engineering Structures, vol. 17, no. 4, pp. 293–304, 1995.
[4] H. Adeli, "Neural networks in civil engineering: 1989–2000," Computer-Aided Civil and Infrastructure Engineering, vol. 16, no. 2, pp. 126–142, 2001.
[5] B. Warner and M. Misra, "Understanding neural networks as statistical tools," The American Statistician, vol. 50, no. 4, pp. 284–293, 1996.
[6] K. Ordaz-Hernandez, X. Fischer, and F. Bennis, "Model reduction technique for mechanical behaviour modelling: efficiency criteria and validity domain assessment," Journal of Mechanical Engineering Science, vol. 222, no. 3, pp. 493–505, 2008.
[7] M. Hosseini and H. Abbas, "Neural network approach for prediction of deflection of clamped beams struck by a mass," Thin-Walled Structures, vol. 60, pp. 222–228, 2012.
[8] R. Guarize, N. A. F. Matos, L. V. S. Sagrilo, and E. C. P. Lima, "Neural networks in the dynamic response analysis of slender marine structures," Applied Ocean Research, vol. 29, no. 4, pp. 191–198, 2007.
[9] N. H. Christiansen, P. E. T. Voie, J. Høgsberg, and N. Sødahl, "Efficient mooring line fatigue analysis using a hybrid method time domain simulation scheme," in Proceedings of the 32nd ASME International Conference on Ocean, Offshore and Arctic Engineering (OMAE '13), vol. 1, 2013.
[10] S. A. Solla, E. Levin, and M. Fleisher, "Accelerated learning in layered neural networks," Complex Systems, vol. 2, no. 6, pp. 625–639, 1988.
[11] J. B. Hampshire and A. H. Waibel, "Novel objective function for improved phoneme recognition using time-delay neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 216–228, 1990.
[12] H. Altincay and M. Demirekler, "Comparison of different objective functions for optimal linear combination of classifiers for speaker identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '01), vol. 1, pp. 401–404, May 2001.
[13] MARINTEK, Simo Theory Manual, Sintef, Trondheim, Norway, 2009.
[14] MARINTEK, Riflex Theory Manual, Sintef, Trondheim, Norway, 2008.
[15] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[16] C. M. Bishop, Neural Networks for Pattern Recognition, The Clarendon Press, Oxford, UK, 1995.
[17] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
the training procedure. Similar differentiation of the weighted error function (4) yields

$$\frac{dE_w}{dW^{O}_{kj}} = \sum_{n} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R-1} z_j \cdot \left| t_{kn} \right|,$$

$$\frac{dE_w}{dW^{O}_{ji}} = \sum_{n} \left( \left(1 - z_j^2\right) \sum_{k=1}^{c} w_{kj} \left| y_k(\mathbf{x}_n, \mathbf{W}) - t_{kn} \right|^{R-1} \right) x_i \cdot \left| t_{kn} \right|. \quad (6)$$
The above equations are implemented into the training algorithm and tested for the two power values R = 2 and R = 4. This gives a total of four different error functions, which in the following will be denoted as

(i) E_2: unweighted error function with R = 2;
(ii) E_w2: weighted error function with R = 2;
(iii) E_4: unweighted error function with R = 4;
(iv) E_w4: weighted error function with R = 4.

It should be noted that the first case, E_2, represents the classic MSE function.
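For concreteness, the four objectives can be sketched as below. The Minkowski form E_R = (1/R) Σ_n |y_n − t_n|^R and the target-magnitude weight |t_n| are assumptions read off from the gradient in (6), not the paper's verbatim definitions of (3)-(4).

```python
import numpy as np

def minkowski_error(y, t, R=2, weighted=False):
    """Minkowski-type training error over a batch of targets.

    Sketch consistent with gradient (6); the 1/R normalization is assumed.
    weighted=True scales each term by |t|, emphasizing large targets.
    """
    y, t = np.asarray(y, float), np.asarray(t, float)
    err = np.abs(y - t) ** R / R
    if weighted:
        err *= np.abs(t)
    return err.sum()

# The four objectives compared in the paper:
E2  = lambda y, t: minkowski_error(y, t, R=2)                  # classic MSE-type error
Ew2 = lambda y, t: minkowski_error(y, t, R=2, weighted=True)
E4  = lambda y, t: minkowski_error(y, t, R=4)
Ew4 = lambda y, t: minkowski_error(y, t, R=4, weighted=True)
```

Raising R from 2 to 4 makes large residuals dominate the sum, which is why E_4 concentrates the fit on the peaks even without an explicit weight.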
3. Application to Structural Model
The structure used as the basis for the comparison of the different error functions is shown in Figure 2. It consists of a floating offshore platform located at 105 m water depth, which is anchored by 18 mooring lines assembled in four main clusters. The external forces acting on the structure are induced by waves, current, and wind.
3.1 Structural Model. In principle, the dynamic analysis of the platform-mooring system corresponds to solving the equation of motion
$$\mathbf{M}(\mathbf{r})\,\ddot{\mathbf{r}} + \mathbf{C}(\mathbf{r})\,\dot{\mathbf{r}} + \mathbf{K}(\mathbf{r})\,\mathbf{r} = \mathbf{f}(t). \quad (7)$$
In this nonlinear equation, r contains the degrees of freedom of the structural model and f includes all external forces acting on the structure from, for example, gravity, buoyancy, and hydrodynamic effects, while the nonconstant matrices M, C, and K represent the system inertia, damping, and stiffness, respectively. The system inertia matrix M takes into account both the structural inertia and the response-dependent hydrodynamic added mass. Linear and nonlinear energy dissipation from both internal structural damping and hydrodynamic damping are accounted for by the damping matrix C. Finally, the stiffness matrix contains contributions from both the elastic stiffness and the response-dependent geometric stiffness.
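To indicate the kind of work the ANN later bypasses, a single implicit time step of (7) might look as follows, using the constant-average-acceleration Newmark scheme with fixed-point updates of the response-dependent matrices. This is a generic illustration under those assumptions, not the actual SIMO/RIFLEX solver.

```python
import numpy as np

def newmark_step(r, v, a, f_next, M, C, K, h,
                 beta=0.25, gamma=0.5, tol=1e-10, it_max=50):
    """One Newmark step for M(r) r'' + C(r) r' + K(r) r = f(t).

    M, C, K are callables returning matrices for the current response,
    mimicking the response-dependent system in (7). Generic sketch only.
    """
    r_new = r.copy()
    for _ in range(it_max):
        Mn, Cn, Kn = M(r_new), C(r_new), K(r_new)
        # effective stiffness and load for the Newmark predictors
        S = Kn + gamma / (beta * h) * Cn + Mn / (beta * h**2)
        rhs = (f_next
               + Mn @ (r / (beta * h**2) + v / (beta * h)
                       + (1 / (2 * beta) - 1) * a)
               + Cn @ (gamma / (beta * h) * r + (gamma / beta - 1) * v
                       + h * (gamma / (2 * beta) - 1) * a))
        r_next = np.linalg.solve(S, rhs)
        if np.linalg.norm(r_next - r_new) < tol:   # matrices have converged
            r_new = r_next
            break
        r_new = r_next
    a_new = ((r_new - r) / (beta * h**2) - v / (beta * h)
             - (1 / (2 * beta) - 1) * a)
    v_new = v + h * ((1 - gamma) * a + gamma * a_new)
    return r_new, v_new, a_new
```

Every physical time step requires assembling the matrices and solving a linear system, repeated inside the iteration loop; this is the cost that motivates replacing the integration with a trained network.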
The nonlinear equations of motion in (7) couple the structural response of the floating platform and the response of the mooring lines. However, the system is effectively solved by separating the solution procedure into the following steps.
First, the motion of the floating platform is computed by the program SIMO [13], assuming a quasistatic catenary mooring line model with geometric nonlinearities. The platform response from this initial analysis is subsequently used as excitation, in terms of prescribed platform motion, in a detailed nonlinear finite element analysis for the specific mooring line with the highest tension stresses. The location of the mooring line with the largest stresses is indicated in Figure 3. For this specific line, the hot-spot with respect to fatigue is located close to the platform, and the force there is in the following referred to as the top tension force. From the detailed, fully nonlinear analysis performed by RIFLEX, the time history of the top tension force at this hot-spot is extracted.
Based on the simulated time histories for both the platform motion and the top tension force, an ANN is trained to predict the top tension force in the selected mooring line with the platform response variables as network input. This is considered next in Section 3.2. In [9] a multilayer ANN was trained to simulate the top tension force two orders of magnitude faster than a corresponding numerical model. The training data was set up and arranged so that a single ANN with a single hidden layer could simulate all fatigue-relevant sea states and thereby provide a significant reduction in the computational effort associated with a fatigue life evaluation. For clarity, the ANN used in this example covers only a few sea states with different significant wave heights and constant peak period. This gives a compact neural network that is conveniently used to illustrate the influence of changing the error function in the training of the network.
3.2 Selection of Training Data. The ultimate purpose of the ANN is to completely bypass the computationally expensive numerical time integration procedure, which in this case is conducted by the RIFLEX model. This means that the input to the neural network must be identical to the input used for the RIFLEX calculations. In this case the input is therefore the platform motion, represented by the six degrees of freedom denoted in Figure 1 and illustrated in Figure 2. In principle, the number of neural network output variables can be chosen freely, and in fact all degrees of freedom from the numerical finite element analysis can be included as output variables in the corresponding ANN. However, the strength of the ANN in this context is that it may provide only the output variable that drives the design of the structure, which in this case is the maximum top tension force in the particular mooring line. This leads to a very fast simulation procedure, which for a well-trained network provides sufficiently accurate results. Thus, the ANN is in the present case designed and trained to predict the top tension of the mooring line, and the platform motion (six motion components: surge, sway, heave, roll, pitch, and yaw) is, together with the top tension of previous time steps, used as input to the ANN; see Figure 1. This means that the input vector x_n at time increment n can be constructed as

$$\mathbf{x}_n = \big[\, [x_t \; x_{t-h} \cdots x_{t-dh}],\; [y_t \; y_{t-h} \cdots y_{t-dh}],\; [z_t \; z_{t-h} \cdots z_{t-dh}],\; [\alpha_t \; \alpha_{t-h} \cdots \alpha_{t-dh}],\; [\beta_t \; \beta_{t-h} \cdots \beta_{t-dh}],\; [\gamma_t \; \gamma_{t-h} \cdots \gamma_{t-dh}],\; [T_{t-h} \; T_{t-2h} \cdots T_{t-dh}] \,\big]^{T}, \quad (8)$$
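The assembly of (8) from stored motion and tension histories can be sketched as below; the array layout and the `stride` argument (samples per ANN step h) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_input(motions, tension, n, d, stride=1):
    """Assemble the ANN input vector x_n of (8) at time index n.

    motions: array (N, 6) with surge, sway, heave, roll, pitch, yaw;
    tension: array (N,) with the top tension history;
    d:       model memory (number of previous time steps);
    stride:  samples per ANN time step h (sketch assumption).
    """
    # indices t, t-h, ..., t-dh
    idx = n - stride * np.arange(d + 1)
    x = [motions[idx, k] for k in range(6)]   # six motion histories
    x.append(tension[idx[1:]])                # T at t-h ... t-dh only
    return np.concatenate(x)
```

With d = 4 this gives 6(d+1) + d = 34 input values: the current and four previous samples of each motion component, plus the four previous tension values.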
Figure 2 Mooring system with floating platform and anchor lines

Figure 3 Mooring line system (top view), indicating the traveling direction of waves, current, and wind and the mooring line with highest stress
where t = nh denotes the current time, h is the time increment, and d is the number of previous time steps included in the input, that is, the model memory. The corresponding ANN output is the value of the top tension force T_t in the mooring line:

$$y_n = T_t. \quad (9)$$
Since there is only one network output, y is a scalar and not a vector as in (1). For the training of the ANN, nonlinear simulations in RIFLEX are conducted for sea states with a significant wave height of H_s = 2 m, 8 m, and 14 m, respectively. While neural networks are very good at interpolating within the training range, they are typically only able to perform limited extrapolation outside the training range. Thus, the selected training set must contain both the minimum wave height (2 m), the maximum wave height (14 m), and, in this case, a moderate wave height (8 m) to provide sufficient training data over the full range of interest. With these wave heights included in the training data, the ANN is expected to be able to provide accurate time histories for the top tension force for all intermediate wave heights. The seven 3-hour simulation records generated by RIFLEX
are divided into a training set and a validation set. The data used for training of the ANN is shown in Figures 4 and 5. Figure 4 shows the time histories for the six motion degrees of freedom of the platform, calculated by the initial analysis in SIMO and used as input to both the full numerical analysis in RIFLEX and the ANN training and simulation. Figure 5 shows the corresponding time histories for the top tension force determined by RIFLEX. The full time histories shown in Figures 4 and 5 are constructed of time series for the three significant wave heights. The first 830 seconds of the training set represent 2 m significant wave height, the next 830 seconds are for 8 m wave height, and the remaining part is then for 14 m wave height.
The SIMO simulations are conducted with a time step size of 0.5 s. In the subsequent RIFLEX simulations, the time step must be sufficiently small to keep the associated Newton-Raphson iteration algorithm stable. In these simulations a time step size of 0.1 s is therefore chosen, which means that the additional input parameters are obtained by linear interpolation between the simulation values from SIMO. For the ANN, the time step size h must be chosen so that the network is able to grasp the dynamic behavior of the structure. Therefore, in many cases the ANN is capable of
Figure 4 Platform motion (surge, sway, heave, roll, pitch, and yaw) used as ANN training input data

Figure 5 Tension force history used as ANN training target data
Figure 6 Validation error as function of number of neurons in the hidden layer
handling fairly large time step increments compared to the corresponding numerical models. When using a larger time step in the ANN simulations, for example by omitting a number of in-between data points, it is possible to reduce the size of the training data set and thereby reduce the computational time used for ANN training and eventually also for the ANN simulation. In the example of this paper, a time step size of h = 2.0 s is found to yield a good balance between accuracy and computational efficiency, and this time step h is therefore used for the ANN simulations.
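Once trained, the network replaces the time integration by a feed-forward recursion: at every step the current and previous motion samples, together with the network's own d previous tension predictions, form the input of (8), and the new prediction is fed back for the next step. A minimal sketch, where the `predict` callable stands in for the trained network and the seeding of the first d values is an assumption:

```python
import numpy as np

def simulate_tension(predict, motions, T_init, d):
    """Closed-loop ANN simulation of the top tension history.

    predict: trained network mapping one input vector to one tension value;
    motions: array (N, 6) of platform motion sampled at the ANN step h;
    T_init:  first d tension values used to seed the recursion.
    """
    N = motions.shape[0]
    T = np.empty(N)
    T[:d] = T_init
    for n in range(d, N):
        hist = motions[n - d:n + 1][::-1]   # rows ordered t, t-h, ..., t-dh
        x = np.concatenate([hist[:, k] for k in range(6)]
                           + [T[n - d:n][::-1]])
        T[n] = predict(x)                   # feed prediction back as input
    return T
```

Each step is a single forward pass with no equilibrium iterations, which is the source of the speed-up over the finite element integration.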
3.3 Design of ANN Architecture. In the design of the ANN architecture, three variables are investigated: (1) the number of neurons in the hidden layer, (2) the size of the model memory d, and (3) the required amount of training data. When the ANN has been trained and is ready for use, the network size has no significant influence on the total simulation time and computational effort. The main time consumer is the training part, and the time used for training of the network highly depends on the network size and the size of the training data set. Hence, it is of great interest to design an ANN architecture that is as compact and effective as possible.
Figure 6 shows a plot of the test error measure E_test relative to the number of neurons in the hidden layer of the ANN. In this section, where the three basic ANN variables are chosen, the error measure is the mean square error (MSE), corresponding to the E_2 error measure in Section 2.3. The curve in Figure 6 furthermore represents the mean value of the error based on five simulations, while the vertical bars indicate the standard deviation. It is seen from this curve that both the error and the scatter of the trained ANN are lowest when the hidden layer contains four neurons. Therefore, an effective and fairly accurate ANN performance is expected when four neurons in the hidden layer are used in the following simulations.
Figure 7 shows the test error relative to the model memory d, which represents the number of previous time steps used as network input. First of all, it is found that including memory in the model significantly reduces the error. However, it is also seen from the figure that an increase of the memory beyond four previous steps implies no significant improvement in the ANN performance. Thus, a four-step memory, that is, d = 4, is used in the following numerical analyses.
For the training of any ANN it is always crucial to have a sufficient amount of training data in order to cover the full range of the network and secure applicability with sufficient statistical significance. Figure 8 shows the test error as a function of the length of the training data set. As for the parameter studies in Figures 6 and 7, the present curve shows the mean results based on five simulation records. To make sure that a sufficient amount of data is used for the training of the ANN, a total simulated record of 2500 s is included, which corresponds to approximately 14 minutes for each of the three sea states. It is seen in Figure 8 that this length of the simulation record is more than sufficient to secure a low error.
The trained ANN is able to generate nonlinear output without equilibrium iterations and hence at an often significantly higher computational pace compared to classic integration procedures with embedded iteration schemes. Figure 9 shows the simulation of the top tension force in the mooring line calculated by the finite element method in RIFLEX and by the trained ANN. The four subfigures in Figure 9 represent the four wave heights that were not part of the ANN training, that is, H_s = 4 m, 6 m, 10 m, and 12 m. For these particular simulation records, the trained ANN calculates a factor of about 600 times faster than the FEM calculations by RIFLEX.
34 Comparison of Error Measures In the design of theANN architecture presented above the results are obtainedfor an ANN trained with the MSE as objective function orerror measure It is in the following conveniently assumedthat this ANN architecture is valid regardless of the specificchoice of error function Thus the various error measurespresented in Section 23 are in this section compared for theANN with four neurons in the hidden layer four memoryinput variables and a training length of 2500 s
As mentioned earlier some of the pregenerated data aresaved for performance validation of the trained ANN Thesedata are used to calculate a validation error 119864test whichis the measure for the accuracy of the trained networkFigure 10 shows the development in the validation errorduring the network training with all four different errorfunctions present in Section 23 It is clearly seen that all fourerror measures are minimized during training whereas it isdifficult to compare the detailed performance and efficiencyof the four different networks based on these curves
Figure 11(a) summarizes the development of the valida-tion error for the four ANN but this time the validation erroris calculated using the same MSE error measure (119864
2) to give
a consistent basis for comparison Thus the four networkshave been trained with four error measures respectivelywhile in Figure 11(a) they are compared by the MSE Eventhough the networks here are compared on common groundit is still difficult to evaluate how well they will performindividually on a full fatigue analysis Figure 11(b) illustratesthe accuracy of the four networks by showing a close upof a local maximum in the top tension force time historyIt is seen that the two unweighted error functions 119864
2and
119864
4 perform superiorly compared to the weighted functions
Also the unweighted error measures provide a smaller MSEerror in Figure 11(a)This indicates that weighting of the errorfunctions implies no improvement of the performance andaccuracy of the ANN
35 Rain FlowCount Themagnitude of the various test errormeasures is difficult to relate directly to the performance oftheANNcompared to the performance of theRIFLEXmodelSince these long time-domain simulations are often used infatigue life calculations an appropriate way to evaluate theaccuracy of the ANN is to compare the accumulated rain
8 Journal of Applied Mathematics
0 2 4 6 80
0002
0004
0006
0008
001
Number of time steps included in input
Ete
st
Figure 7 Validation error as function of model memory
0 500 1000 1500 2000 25000
002
004
006
008
01
012
Length of simulated training data (s)
Ete
st
Figure 8 Validation error as function of amount of training data
flow counts of the tension force cycles for each significantwave height In these fatigue analyses the RIFLEX resultsare considered as the exact solution For these calculationsthe full 3-hour simulations are used and Figure 12 shows theresults of the rain flow count of accumulated tension forcecycles for each of the significant wave heights Deviationsbetween RIFLEX and ANN simulations are listed in Table 1It should be noted that the deviations for the individual sevensea states do not add up to give the total deviation because theindividual sea states do not contribute equally to the overalldamage It is seen that the various networks perform verywellon all individual sea states and that the best networks therebyobtain a deviation of less than 2 for the accumulated tensionforce cycles when summing up the contributions from allsea states This deviation is a robust estimate and is likely toalso represent the accuracy of a full subsequent fatigue lifeevaluation
It is seen from the rain flow counting results in Figure 12and Table 1 that the neural networks trained with unweighted
Table 1 Deviations on accumulated tension force cycles
119867
119878119864
2119864
119908
2119864
4119864
119908
4
2 minus36 minus163 73 6584 minus69 minus130 48 3206 minus44 minus80 26 1498 minus27 minus48 minus15 8610 minus20 minus35 minus16 6912 minus14 minus31 minus17 5314 minus08 minus03 21 31Total minus17 minus38 16 64
error function in general perform slightly better than thosetrained with weighted error functionsThus placing a weighton the error function does not seem to have the desired effectin this application concerning the analysis of a mooring linesystem for a floating platform
Journal of Applied Mathematics 9
1000 1100 1200 1300 1400 15002100
2200
2300
2400
2500
2600
2700
1600 1700 1800 1900 2000 21002000
2200
2400
2600
2800
2200 2300 2400 2500 2600 27001000
1500
2000
2500
3000
2800 2900 3000 3100 3200 33001000
1500
2000
2500
3000
(kN
)(k
N)
(kN
)(k
N)
ANNRIFLEX
ANNRIFLEX
HS = 6HS = 4
HS = 10 HS = 12
Figure 9 Comparison of simulated top tension forces using ANN and RIFLEX for the four wave heights that were not part of the networktraining
Training iterations
Ete
st
10minus1
10minus2
10minus3
102
103
104
105
E2
Ew
2
(a)
Ete
st
10minus1
10minus2
10minus3
100
10minus4
102
103
104
105
Training iterations
E4
Ew
4
(b)
Figure 10 Validation error during training (a) Error functions with 119877 = 2 1198642 1198641199082 (b) Error functions with 119877 = 4 119864
4 1198641199084
10 Journal of Applied Mathematics
Training iterations
Ete
st
10minus1
10minus2
10minus3
100
102
103
104
105
E4
Ew
4E2
Ew
2
(a) Validation error during training
7850 7855 78602500
3000
3500
RIFLEX
Time (s)
E4
E2
Ew
4
Ew
2
(kN
)(b) Accuracy of simulation at peak
Figure 11 Comparison of performance for the four different error functions
2 4 6 8 10 12 140
05
1
15
2
Acc
tens
ion
RIFLEX
times108
E4
Ew
4E2
Ew
2
HS
Figure 12 Accumulated stress for each sea statemdashcalculated using rain flow counting
4 Conclusion
It has been shown how a relatively small and compactartificial neural network can be trained to performhigh speeddynamic simulation of tension forces in a mooring line ona floating platform Furthermore it has been shown that aproper selection of training data enables the ANN to covera wide range of different sea states even for sea states thatare not included directly in the training data In the examplepresented in this paper it is clear that weighting the errorfunction used to train an ANN in order to emphasize peakresponse does not improve the network performance withrespect to accuracy of fatigue calculations In fact the ANNappears to perform worse when trained with the weightederror function On the other hand it appears that increasingthe power of the error function from two to four provides aslight improvement to the performance of the trained ANN
However the idea of a weighted error function seems toreduce the ANN performance So apparently focusing onthe high amplitudes seems to deteriorate the low amplituderesponse more than it improves the response with largeamplitudes
As a conclusion the Minkowski error with 119877 = 4 is inter-esting for the mooring line example in more than one aspectIt provides more focus on the large amplitudes and improvesthe ANN slightly Furthermore the second derivative of the119864
4is fairly easy to determine which makes this objective
function suitable for several network optimizing schemessuch as Optimal Brain Damage (OBD) and Optimal BrainSurgeon (OBS) that are based on the second derivative ofthe error function Network optimization is however notconsidered further in this paper but will be subject of futurework
Journal of Applied Mathematics 11
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
References
[1] DNV ldquoOffshore Standard DNV-OS-F201mdashDynamic RisersrdquoDet Norske Veritas October 2010
[2] DNV ldquoRecommended Practice DNV-RP-F204mdashRiser FatiguerdquoDet Norske Veritas July 2005
[3] M H Patel and F B Seyed ldquoReview of flexible riser modellingand analysis techniquesrdquo Engineering Structures vol 17 no 4pp 293ndash304 1995
[4] H Adeli ldquoNeural networks in civil engineering 1989ndash2000rdquoComputer-AidedCivil and Infrastructure Engineering vol 16 no2 pp 126ndash142 2001
[5] B Warner and M Misra ldquoUnderstanding neural networks asstatistical toolsrdquo The American Statistician vol 50 no 4 pp284ndash293 1996
[6] K Ordaz-Hernandez X Fischer and F Bennis ldquoModel reduc-tion technique for mechanical behaviour modelling efficiencycriteria and validity domain assessmentrdquo Journal of MechanicalEngineering Science vol 222 no 3 pp 493ndash505 2008
[7] M Hosseini and H Abbas ldquoNeural network approach forprediction of deflection of clamped beams struck by a massrdquoThin-Walled Structures vol 60 pp 222ndash228 2012
[8] R Guarize N A F Matos L V S Sagrilo and E C P LimaldquoNeural networks in the dynamic response analysis of slendermarine structuresrdquo Applied Ocean Research vol 29 no 4 pp191ndash198 2007
[9] N H Christiansen P E T Voie J Hoslashgsberg and N SoslashdahlldquoEfficient mooring line fatigue analysis using a hybrid methodtime domain simulation schemerdquo in Proceedings of the 32ndASME International Conference on Ocean Offshore and ArcticEngineering (OMAE rsquo13) vol 1 2013
[10] S A Solla E Levin and M Fleisher ldquoAccelerated learning inlayered neural networksrdquo Complex Systems vol 2 no 6 pp625ndash639 1988
[11] J B Hampshire and A H Waibel ldquoNovel objective functionfor improved phoneme recognition using time-delay neuralnetworksrdquo IEEE Transactions on Neural Networks vol 1 no 2pp 216ndash228 1990
[12] HAltincay andMDemirekler ldquoComparison of different objec-tive functions for optimal linear combination of classifiers forspeaker identificationrdquo in Proceedings of the IEEE InternationalConference on Acoustics Speech and Signal Processing (ICASSPrsquo01) vol 1 pp 401ndash404 May 2001
[13] MARINTEK SimoTheoryManual Sintef Trondheim Norway2009
[14] MARINTE Riflex Theory Manual Sintef Trondheim Norway2008
[15] C M Bishop Pattern Recognition and Machine LearningSpringer New York NY USA 2006
[16] C M Bishop Neural Networks for Pattern Recognition TheClarendon Press Oxford UK 1995
[17] R Fletcher Practical Methods of Optimization John Wiley ampSons Chichester UK 2nd edition 1987
Journal of Applied Mathematics 5
Figure 2: Mooring system with floating platform and anchor lines (axes x, y, z; line angles α, β, γ).
Figure 3: Mooring line system (top view), showing the traveling direction of waves, current, and wind, and the mooring line with the highest stress.
where t = nh denotes the current time, h is the time increment, and d is the number of previous time steps included in the input, that is, the model memory. The corresponding ANN output is the value of the top tension force T_t in the mooring line:

y_n = T_t. (9)
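The mapping above, with the current and d previous platform positions as input and the top tension as output, can be sketched as a sliding window over the motion time series. The function name and array layout below are illustrative, not taken from the paper:

```python
import numpy as np

def build_ann_dataset(motions, tension, d):
    """Assemble ANN training pairs from platform motions and top tension.

    motions : (N, 6) array of surge, sway, heave, roll, pitch, yaw per step
    tension : (N,) array of top tension T_t from the nonlinear solver
    d       : model memory, i.e. number of previous time steps in each input
    Returns X of shape (N - d, 6 * (d + 1)) and y of shape (N - d,).
    """
    N = len(tension)
    # Input at step n is the flattened window of motions at t, t-h, ..., t-dh.
    X = np.stack([motions[n - d:n + 1].ravel() for n in range(d, N)])
    # Target at step n is the scalar top tension at the current time t = nh.
    y = tension[d:N]
    return X, y
```

With d = 4 and six motion components, each input vector has 30 entries, matching the memory study discussed later in the section.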
Since there is only one network output, y is a scalar and not a vector as in (1). For the training of the ANN, nonlinear simulations in RIFLEX are conducted for sea states with significant wave heights of H_s = 2 m, 8 m, and 14 m, respectively. While neural networks are very good at interpolating within the training range, they are typically only able to perform limited extrapolation outside the training range. Thus, the selected training set must contain both the minimum wave height (2 m), the maximum wave height (14 m), and, in this case, a moderate wave height (8 m) to provide sufficient training data over the full range of interest. With these wave heights included in the training data, the ANN is expected to provide accurate time histories of the top tension force for all intermediate wave heights. The seven 3-hour simulation records generated by RIFLEX
are divided into a training set and a validation set. The data used for training of the ANN are shown in Figures 4 and 5. Figure 4 shows the time histories of the six motion degrees of freedom of the platform, calculated by the initial analysis in SIMO and used as input to both the full numerical analysis in RIFLEX and the ANN training and simulation. Figure 5 shows the corresponding time histories of the top tension force determined by RIFLEX. The full time histories shown in Figures 4 and 5 are constructed of time series for the three significant wave heights: the first 830 seconds of the training set represent 2 m significant wave height, the next 830 seconds are for 8 m wave height, and the remaining part is for 14 m wave height.
The SIMO simulations are conducted with a time step size of 0.5 s. In the subsequent RIFLEX simulations, the time step must be sufficiently small to keep the associated Newton-Raphson iteration algorithm stable. In these simulations, a time step size of 0.1 s is therefore chosen, which means that the additional input parameters are obtained by linear interpolation between the simulation values from SIMO. For the ANN, the time step size h must be chosen so that the network is able to grasp the dynamic behavior of the structure. Therefore, in many cases the ANN is capable of
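The linear interpolation of the 0.5 s SIMO output onto the 0.1 s RIFLEX grid can be sketched with NumPy. This is a generic illustration with toy data; the actual coupling is handled internally by the solvers:

```python
import numpy as np

# Coarse platform-motion signal on the 0.5 s SIMO grid (toy data).
t_simo = np.arange(0.0, 10.0, 0.5)
surge = np.sin(0.3 * t_simo)

# Fine 0.1 s grid used by the Newton-Raphson iterations in RIFLEX;
# the in-between values are obtained by linear interpolation.
t_riflex = np.arange(0.0, 9.5 + 1e-9, 0.1)
surge_fine = np.interp(t_riflex, t_simo, surge)
```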
Figure 4: Platform motion (surge, sway, heave, roll, pitch, and yaw) over 0-2500 s, used as ANN training input data.
Figure 5: Top tension force history (kN) over 0-2500 s, used as ANN training target data.
Figure 6: Validation error E_test as function of the number of neurons in the hidden layer.
handling fairly large time step increments compared to the corresponding numerical models. When using a larger time step in the ANN simulations, for example by omitting a number of in-between data points, it is possible to reduce the size of the training data set and thereby reduce the computational time used for ANN training and eventually also for the ANN simulation. In the example of this paper, a time step size of h = 2.0 s is found to yield a good balance between accuracy and computational efficiency, and this time step h is therefore used for the ANN simulations.
3.3. Design of ANN Architecture. In the design of the ANN architecture, three variables are investigated: (1) the number of neurons in the hidden layer, (2) the size of the model memory d, and (3) the required amount of training data. Once the ANN has been trained and is ready for use, the network size has no significant influence on the total simulation time and computational effort. The main time consumer is the training part, and the time used for training of the network highly depends on the network size and the size of the training data set. Hence, it is of great interest to design an ANN architecture that is as compact and effective as possible.
Figure 6 shows a plot of the test error measure E_test relative to the number of neurons in the hidden layer of the ANN. In this section, where the three basic ANN variables are chosen, the error measure is the mean square error (MSE), corresponding to the E_2 error measure in Section 2.3. The curve in Figure 6 furthermore represents the mean value of the error based on five simulations, while the vertical bars indicate the standard deviation. It is seen from this curve that the error and the scatter in performance of the trained ANN are lowest when the hidden layer contains four neurons. Therefore, an effective and fairly accurate ANN performance is expected when four neurons in the hidden layer are used in the following simulations.
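The architecture study behind Figure 6 (five trainings per candidate size, then mean and standard deviation of the validation MSE) can be sketched as follows. Scikit-learn's MLPRegressor is used here purely as a stand-in for the paper's network and training scheme:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

def hidden_size_study(X_tr, y_tr, X_val, y_val, sizes=range(1, 21), repeats=5):
    """Return {size: (mean, std) of validation MSE} per hidden-layer size."""
    stats = {}
    for m in sizes:
        errs = []
        for r in range(repeats):  # repeated trainings give the scatter bars
            net = MLPRegressor(hidden_layer_sizes=(m,), activation="tanh",
                               max_iter=2000, random_state=r)
            net.fit(X_tr, y_tr)
            errs.append(mean_squared_error(y_val, net.predict(X_val)))
        stats[m] = (float(np.mean(errs)), float(np.std(errs)))
    return stats
```

The size with the lowest mean error and scatter would then be selected, which in the paper's study is four neurons.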
Figure 7 shows the test error relative to the model memory d, which represents the number of previous time steps used as network input. First of all, it is found that including memory in the model significantly reduces the error. However, it is also seen from the figure that an increase of the memory beyond four previous steps implies no significant improvement in the ANN performance. Thus, a four-step memory, that is, d = 4, is used in the following numerical analyses.
For the training of any ANN, it is crucial to have a sufficient amount of training data in order to cover the full range of the network and secure applicability with sufficient statistical significance. Figure 8 shows the test error as a function of the length of the training data set. As for the parameter studies in Figures 6 and 7, the curve shows the mean results based on five simulation records. To make sure that a sufficient amount of data is used for the training of the ANN, a total simulated record of 2500 s is included, which corresponds to a length of approximately 14 minutes for each of the three sea states. It is seen in Figure 8 that this length of the simulation record is more than sufficient to secure a low error.
The trained ANN is able to generate nonlinear output without equilibrium iterations and hence at an often significantly higher computational pace compared to classic integration procedures with embedded iteration schemes. Figure 9 shows the simulation of the top tension force in the mooring line calculated by the finite element method in RIFLEX and by the trained ANN. The four subfigures in Figure 9 represent the four wave heights that were not part of the ANN training, that is, H_s = 4 m, 6 m, 10 m, and 12 m. For these particular simulation records, the trained ANN calculates about 600 times faster than the FEM calculations by RIFLEX.
3.4. Comparison of Error Measures. In the design of the ANN architecture presented above, the results are obtained for an ANN trained with the MSE as objective function or error measure. In the following, it is conveniently assumed that this ANN architecture is valid regardless of the specific choice of error function. Thus, the various error measures presented in Section 2.3 are in this section compared for the ANN with four neurons in the hidden layer, four memory input variables, and a training length of 2500 s.
As mentioned earlier, some of the pregenerated data are saved for performance validation of the trained ANN. These data are used to calculate a validation error E_test, which is the measure of the accuracy of the trained network. Figure 10 shows the development of the validation error during the network training with all four different error functions presented in Section 2.3. It is clearly seen that all four error measures are minimized during training, whereas it is difficult to compare the detailed performance and efficiency of the four different networks based on these curves.
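The four error measures can be written generically as a (possibly weighted) Minkowski error. The weighting used below, which grows with the target magnitude to emphasize peaks, is only an illustrative stand-in for the weighting defined in the paper's Section 2.3, and constant prefactors are ignored:

```python
import numpy as np

def minkowski_error(y, t, R=2, w=None):
    """E_R = mean(w * |y - t|**R); R=2 gives the MSE-type E_2, R=4 gives E_4.

    Passing a per-sample weight w gives the weighted variants E^w_R;
    w=None gives the unweighted measures.
    """
    y, t = np.asarray(y, float), np.asarray(t, float)
    e = np.abs(y - t) ** R
    if w is not None:
        e = np.asarray(w, float) * e
    return float(e.mean())

# Toy targets/outputs and an assumed peak-emphasizing weight ~ |target|.
t = np.array([0.1, 0.2, 1.5, 2.0])
y = np.array([0.0, 0.3, 1.2, 2.4])
w = np.abs(t) / np.abs(t).mean()
E2, E4 = minkowski_error(y, t), minkowski_error(y, t, R=4)
E2w, E4w = minkowski_error(y, t, w=w), minkowski_error(y, t, R=4, w=w)
```

Raising R from 2 to 4 penalizes large residuals, which occur at the peaks, more strongly even without any explicit weighting.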
Figure 11(a) summarizes the development of the validation error for the four ANNs, but this time the validation error is calculated using the same MSE error measure (E_2) to give a consistent basis for comparison. Thus, the four networks have been trained with the four error measures, respectively, while in Figure 11(a) they are compared by the MSE. Even though the networks here are compared on common ground, it is still difficult to evaluate how well they will perform individually on a full fatigue analysis. Figure 11(b) illustrates the accuracy of the four networks by showing a close-up of a local maximum in the top tension force time history. It is seen that the two unweighted error functions, E_2 and E_4, perform superiorly compared to the weighted functions. Also, the unweighted error measures provide a smaller MSE error in Figure 11(a). This indicates that weighting of the error functions implies no improvement of the performance and accuracy of the ANN.
3.5. Rain Flow Count. The magnitude of the various test error measures is difficult to relate directly to the performance of the ANN compared to the performance of the RIFLEX model. Since these long time-domain simulations are often used in fatigue life calculations, an appropriate way to evaluate the accuracy of the ANN is to compare the accumulated rain
Figure 7: Validation error E_test as function of the model memory (number of time steps included in the input).
Figure 8: Validation error E_test as function of the amount of training data (length of simulated training data in s).
flow counts of the tension force cycles for each significant wave height. In these fatigue analyses, the RIFLEX results are considered as the exact solution. For these calculations the full 3-hour simulations are used, and Figure 12 shows the results of the rain flow count of accumulated tension force cycles for each of the significant wave heights. Deviations between RIFLEX and ANN simulations are listed in Table 1. It should be noted that the deviations for the seven individual sea states do not add up to give the total deviation, because the individual sea states do not contribute equally to the overall damage. It is seen that the various networks perform very well on all individual sea states and that the best networks thereby obtain a deviation of less than 2% for the accumulated tension force cycles when summing up the contributions from all sea states. This deviation is a robust estimate and is likely to also represent the accuracy of a full subsequent fatigue life evaluation.
Table 1: Deviations (%) on accumulated tension force cycles.

H_S (m)   E_2    E^w_2   E_4    E^w_4
2        -3.6   -16.3    7.3    65.8
4        -6.9   -13.0    4.8    32.0
6        -4.4    -8.0    2.6    14.9
8        -2.7    -4.8   -1.5     8.6
10       -2.0    -3.5   -1.6     6.9
12       -1.4    -3.1   -1.7     5.3
14       -0.8    -0.3    2.1     3.1
Total    -1.7    -3.8    1.6     6.4

It is seen from the rain flow counting results in Figure 12 and Table 1 that the neural networks trained with unweighted error functions in general perform slightly better than those trained with weighted error functions. Thus, placing a weight on the error function does not seem to have the desired effect in this application concerning the analysis of a mooring line system for a floating platform.
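The rain flow counts compared above can be reproduced with a generic three-point counter following the ASTM E1049 rule. This is a standard implementation sketch, not the paper's own post-processing code:

```python
def rainflow(series):
    """Count rain flow cycles in a load history; returns {range: cycles}.

    Closed cycles count 1.0 and residual half cycles 0.5, as used when
    accumulating tension-force cycles per sea state (ASTM E1049 style).
    """
    # Reduce the series to its turning points (local extrema).
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                    # still moving in the same direction
        elif x != tp[-1]:
            tp.append(x)
    counts = {}
    def add(rng, c):
        counts[rng] = counts.get(rng, 0.0) + c
    stack = []
    for p in tp:
        stack.append(p)
        while len(stack) >= 3:
            X = abs(stack[-1] - stack[-2])    # candidate range
            Y = abs(stack[-2] - stack[-3])    # previous range
            if X < Y:
                break
            if len(stack) == 3:               # Y contains the start point
                add(Y, 0.5)
                stack.pop(0)
            else:                             # Y forms a closed cycle
                add(Y, 1.0)
                last = stack.pop()
                stack.pop(); stack.pop()
                stack.append(last)
    for a, b in zip(stack, stack[1:]):        # residual half cycles
        add(abs(b - a), 0.5)
    return counts
```

Accumulated tension cycles per sea state then follow by summing range times count, or by feeding the counted ranges into an S-N damage rule.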
Figure 9: Comparison of simulated top tension forces (kN) using ANN and RIFLEX for the four wave heights that were not part of the network training (H_S = 4, 6, 10, and 12 m).
Figure 10: Validation error E_test during training. (a) Error functions with R = 2: E_2 and E^w_2. (b) Error functions with R = 4: E_4 and E^w_4.
Figure 11: Comparison of performance for the four different error functions. (a) Validation error E_test during training. (b) Accuracy of simulation (kN) at a peak in the top tension force, compared with RIFLEX.
Figure 12: Accumulated tension for each sea state H_S, calculated using rain flow counting, for RIFLEX and the four error functions.
4. Conclusion

It has been shown how a relatively small and compact artificial neural network can be trained to perform high-speed dynamic simulation of tension forces in a mooring line on a floating platform. Furthermore, it has been shown that a proper selection of training data enables the ANN to cover a wide range of different sea states, even sea states that are not included directly in the training data. In the example presented in this paper, it is clear that weighting the error function used to train an ANN in order to emphasize peak response does not improve the network performance with respect to the accuracy of fatigue calculations. In fact, the ANN appears to perform worse when trained with a weighted error function. On the other hand, it appears that increasing the power of the error function from two to four provides a slight improvement in the performance of the trained ANN.
Overall, the idea of a weighted error function seems to reduce the ANN performance: focusing on the high amplitudes apparently deteriorates the low-amplitude response more than it improves the response with large amplitudes.

In conclusion, the Minkowski error with R = 4 is interesting for the mooring line example in more than one aspect. It puts more focus on the large amplitudes and improves the ANN slightly. Furthermore, the second derivative of E_4 is fairly easy to determine, which makes this objective function suitable for several network optimization schemes, such as Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS), that are based on the second derivative of the error function. Network optimization is, however, not considered further in this paper but will be the subject of future work.
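The remark about the second derivative of E_4 can be checked symbolically. Per sample, and ignoring any constant prefactor in the paper's definition, the curvature with respect to the network output is simply 12(y - t)^2, which is the quantity OBD/OBS-style pruning schemes need:

```python
import sympy as sp

y, t = sp.symbols("y t", real=True)
E4_sample = (y - t) ** 4               # per-sample Minkowski error, R = 4
curvature = sp.diff(E4_sample, y, 2)   # second derivative w.r.t. output y
assert sp.simplify(curvature - 12 * (y - t) ** 2) == 0
```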
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] DNV ldquoOffshore Standard DNV-OS-F201mdashDynamic RisersrdquoDet Norske Veritas October 2010
[2] DNV ldquoRecommended Practice DNV-RP-F204mdashRiser FatiguerdquoDet Norske Veritas July 2005
[3] M H Patel and F B Seyed ldquoReview of flexible riser modellingand analysis techniquesrdquo Engineering Structures vol 17 no 4pp 293ndash304 1995
[4] H Adeli ldquoNeural networks in civil engineering 1989ndash2000rdquoComputer-AidedCivil and Infrastructure Engineering vol 16 no2 pp 126ndash142 2001
[5] B Warner and M Misra ldquoUnderstanding neural networks asstatistical toolsrdquo The American Statistician vol 50 no 4 pp284ndash293 1996
[6] K Ordaz-Hernandez X Fischer and F Bennis ldquoModel reduc-tion technique for mechanical behaviour modelling efficiencycriteria and validity domain assessmentrdquo Journal of MechanicalEngineering Science vol 222 no 3 pp 493ndash505 2008
[7] M Hosseini and H Abbas ldquoNeural network approach forprediction of deflection of clamped beams struck by a massrdquoThin-Walled Structures vol 60 pp 222ndash228 2012
[8] R Guarize N A F Matos L V S Sagrilo and E C P LimaldquoNeural networks in the dynamic response analysis of slendermarine structuresrdquo Applied Ocean Research vol 29 no 4 pp191ndash198 2007
[9] N H Christiansen P E T Voie J Hoslashgsberg and N SoslashdahlldquoEfficient mooring line fatigue analysis using a hybrid methodtime domain simulation schemerdquo in Proceedings of the 32ndASME International Conference on Ocean Offshore and ArcticEngineering (OMAE rsquo13) vol 1 2013
[10] S A Solla E Levin and M Fleisher ldquoAccelerated learning inlayered neural networksrdquo Complex Systems vol 2 no 6 pp625ndash639 1988
[11] J B Hampshire and A H Waibel ldquoNovel objective functionfor improved phoneme recognition using time-delay neuralnetworksrdquo IEEE Transactions on Neural Networks vol 1 no 2pp 216ndash228 1990
[12] HAltincay andMDemirekler ldquoComparison of different objec-tive functions for optimal linear combination of classifiers forspeaker identificationrdquo in Proceedings of the IEEE InternationalConference on Acoustics Speech and Signal Processing (ICASSPrsquo01) vol 1 pp 401ndash404 May 2001
[13] MARINTEK SimoTheoryManual Sintef Trondheim Norway2009
[14] MARINTE Riflex Theory Manual Sintef Trondheim Norway2008
[15] C M Bishop Pattern Recognition and Machine LearningSpringer New York NY USA 2006
[16] C M Bishop Neural Networks for Pattern Recognition TheClarendon Press Oxford UK 1995
[17] R Fletcher Practical Methods of Optimization John Wiley ampSons Chichester UK 2nd edition 1987
Journal of Applied Mathematics 11
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
References
[1] DNV ldquoOffshore Standard DNV-OS-F201mdashDynamic RisersrdquoDet Norske Veritas October 2010
[2] DNV ldquoRecommended Practice DNV-RP-F204mdashRiser FatiguerdquoDet Norske Veritas July 2005
[3] M H Patel and F B Seyed ldquoReview of flexible riser modellingand analysis techniquesrdquo Engineering Structures vol 17 no 4pp 293ndash304 1995
[4] H Adeli ldquoNeural networks in civil engineering 1989ndash2000rdquoComputer-AidedCivil and Infrastructure Engineering vol 16 no2 pp 126ndash142 2001
[5] B Warner and M Misra ldquoUnderstanding neural networks asstatistical toolsrdquo The American Statistician vol 50 no 4 pp284ndash293 1996
[6] K Ordaz-Hernandez X Fischer and F Bennis ldquoModel reduc-tion technique for mechanical behaviour modelling efficiencycriteria and validity domain assessmentrdquo Journal of MechanicalEngineering Science vol 222 no 3 pp 493ndash505 2008
[7] M Hosseini and H Abbas ldquoNeural network approach forprediction of deflection of clamped beams struck by a massrdquoThin-Walled Structures vol 60 pp 222ndash228 2012
[8] R Guarize N A F Matos L V S Sagrilo and E C P LimaldquoNeural networks in the dynamic response analysis of slendermarine structuresrdquo Applied Ocean Research vol 29 no 4 pp191ndash198 2007
[9] N H Christiansen P E T Voie J Hoslashgsberg and N SoslashdahlldquoEfficient mooring line fatigue analysis using a hybrid methodtime domain simulation schemerdquo in Proceedings of the 32ndASME International Conference on Ocean Offshore and ArcticEngineering (OMAE rsquo13) vol 1 2013
[10] S A Solla E Levin and M Fleisher ldquoAccelerated learning inlayered neural networksrdquo Complex Systems vol 2 no 6 pp625ndash639 1988
[11] J B Hampshire and A H Waibel ldquoNovel objective functionfor improved phoneme recognition using time-delay neuralnetworksrdquo IEEE Transactions on Neural Networks vol 1 no 2pp 216ndash228 1990
[12] HAltincay andMDemirekler ldquoComparison of different objec-tive functions for optimal linear combination of classifiers forspeaker identificationrdquo in Proceedings of the IEEE InternationalConference on Acoustics Speech and Signal Processing (ICASSPrsquo01) vol 1 pp 401ndash404 May 2001
[13] MARINTEK SimoTheoryManual Sintef Trondheim Norway2009
[14] MARINTE Riflex Theory Manual Sintef Trondheim Norway2008
[15] C M Bishop Pattern Recognition and Machine LearningSpringer New York NY USA 2006
[16] C M Bishop Neural Networks for Pattern Recognition TheClarendon Press Oxford UK 1995
[17] R Fletcher Practical Methods of Optimization John Wiley ampSons Chichester UK 2nd edition 1987
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Applied Mathematics 7
handling fairly large time step increments compared to the corresponding numerical models. When using a larger time step in the ANN simulations, for example by omitting a number of in-between data points, it is possible to reduce the size of the training data set and thereby the computational time used for ANN training and, eventually, also for the ANN simulation. In the example of this paper a time step size of h = 2.0 s is found to yield a good balance between accuracy and computational efficiency, and this time step h is therefore used for the ANN simulations.
3.3. Design of ANN Architecture. In the design of the ANN architecture three variables are investigated: (1) the number of neurons in the hidden layer, (2) the size of the model memory d, and (3) the required amount of training data. Once the ANN has been trained and is ready for use, the network size has no significant influence on the total simulation time and computational effort. The main time consumer is the training part, and the time used for training of the network depends strongly on the network size and the size of the training data set. Hence, it is of great interest to design an ANN architecture that is as compact and effective as possible.
Figure 6 shows a plot of the test error measure E_test relative to the number of neurons in the hidden layer of the ANN. In this section, where the three basic ANN variables are chosen, the error measure is the mean square error (MSE), corresponding to the E_2 error measure in Section 2.3. The curve in Figure 6 furthermore represents the mean value of the error based on five simulations, while the vertical bars indicate the standard deviation. It is seen from this curve that the error and the scatter in performance of the trained ANN are lowest when the hidden layer contains four neurons. Therefore, an effective and fairly accurate ANN performance is expected when four neurons are used in the hidden layer in the following simulations.
Figure 7 shows the test error relative to the model memory d, which represents the number of previous time steps used as network input. First of all, it is found that including memory in the model significantly reduces the error. However, it is also seen from the figure that increasing the memory beyond four previous steps yields no significant improvement in the ANN performance. Thus, a four-step memory, that is, d = 4, is used in the following numerical analyses.
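The d-step model memory corresponds to augmenting each network input with the d previous samples of the excitation record. A minimal sketch of how such a lagged input matrix could be built (the function name and the toy record are illustrative, not from the paper):

```python
import numpy as np

def build_lagged_inputs(x, d):
    """Stack the current sample and the d previous samples of a 1-D
    excitation record x into one network-input row per time step.

    Returns an array of shape (len(x) - d, d + 1); row i holds
    [x[i], x[i+1], ..., x[i+d]], i.e. the sample at step i + d
    together with its d predecessors."""
    n = len(x) - d
    return np.column_stack([x[i:i + n] for i in range(d + 1)])

wave = np.sin(0.1 * np.arange(20))    # toy excitation record
X = build_lagged_inputs(wave, d=4)    # four-step memory, as chosen in the study
print(X.shape)                        # (16, 5)
```

Each row of X then serves as one input pattern; the corresponding target is the tension force at the row's latest time step.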
For the training of any ANN it is crucial to have a sufficient amount of training data in order to cover the full range of the network inputs and secure applicability with sufficient statistical significance. Figure 8 shows the test error as a function of the length of the training data set. As for the parameter studies in Figures 6 and 7, the present curve shows the mean results based on five simulation records. To make sure that a sufficient amount of data is used for the training of the ANN, a total simulated record of 2500 s is included, which corresponds to a length of approximately 14 minutes for each of the three sea states. It is seen in Figure 8 that this length of the simulation record is more than sufficient to secure a low error.
The trained ANN is able to generate nonlinear output without equilibrium iterations and hence at an often significantly higher computational pace compared to classic integration procedures with embedded iteration schemes. Figure 9 shows the simulation of the top tension force in the mooring line calculated by the finite element method in RIFLEX and by the trained ANN. The four subfigures in Figure 9 represent the four wave heights that were not part of the ANN training, that is, H_s = 4 m, 6 m, 10 m, and 12 m. For these particular simulation records the trained ANN calculates about 600 times faster than the FEM calculations by RIFLEX.
3.4. Comparison of Error Measures. In the design of the ANN architecture presented above, the results are obtained for an ANN trained with the MSE as objective function, or error measure. It is in the following conveniently assumed that this ANN architecture is valid regardless of the specific choice of error function. Thus, the various error measures presented in Section 2.3 are in this section compared for the ANN with four neurons in the hidden layer, four memory input variables, and a training length of 2500 s.
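The error measures compared here are instances of the Minkowski error, E_R = (1/N) Σ_n |y_n − t_n|^R, with R = 2 (the MSE) and R = 4, each in an unweighted and a weighted form. The sketch below is hedged: the exact amplitude weighting of Section 2.3 is not reproduced in this excerpt, so the normalized-|target| weight used here is an illustrative assumption, not the paper's definition:

```python
import numpy as np

def minkowski_error(y, t, R=2.0):
    """Unweighted Minkowski error; R = 2 gives the MSE (E2), R = 4 gives E4."""
    return np.mean(np.abs(y - t) ** R)

def weighted_minkowski_error(y, t, R=2.0, w=None):
    """Weighted variant. The default weight, |t| normalized by its maximum,
    emphasizes large-amplitude targets; this is an assumed, illustrative
    choice and not necessarily the weighting used in the paper."""
    if w is None:
        w = np.abs(t) / np.abs(t).max()
    return np.mean(w * np.abs(y - t) ** R)

t = np.array([1.0, 2.0, 4.0])   # targets, e.g. tension forces
y = np.array([1.1, 1.8, 3.5])   # network outputs
print(minkowski_error(y, t, R=2))           # E2 (MSE)
print(minkowski_error(y, t, R=4))           # E4
print(weighted_minkowski_error(y, t, R=2))  # weighted E2 (assumed weight)
```

Raising R from 2 to 4 makes large residuals dominate the objective, which is the mechanism the study exploits to emphasize peak response without an explicit weight.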
As mentioned earlier, some of the pregenerated data are saved for performance validation of the trained ANN. These data are used to calculate a validation error E_test, which is the measure for the accuracy of the trained network. Figure 10 shows the development of the validation error during the network training with all four different error functions presented in Section 2.3. It is clearly seen that all four error measures are minimized during training, whereas it is difficult to compare the detailed performance and efficiency of the four different networks based on these curves.
Figure 11(a) summarizes the development of the validation error for the four ANNs, but this time the validation error is calculated using the same MSE error measure (E_2) to give a consistent basis for comparison. Thus, the four networks have been trained with the four error measures, respectively, while in Figure 11(a) they are compared by the MSE. Even though the networks here are compared on common ground, it is still difficult to evaluate how well they will perform individually in a full fatigue analysis. Figure 11(b) illustrates the accuracy of the four networks by showing a close-up of a local maximum in the top tension force time history. It is seen that the two unweighted error functions, E_2 and E_4, perform superiorly compared to the weighted functions. Also, the unweighted error measures provide a smaller MSE error in Figure 11(a). This indicates that weighting of the error functions implies no improvement of the performance and accuracy of the ANN.
Figure 7: Validation error as a function of model memory.

Figure 8: Validation error as a function of the amount of training data.

3.5. Rain Flow Count. The magnitude of the various test error measures is difficult to relate directly to the performance of the ANN compared to the performance of the RIFLEX model. Since these long time-domain simulations are often used in fatigue life calculations, an appropriate way to evaluate the accuracy of the ANN is to compare the accumulated rain flow counts of the tension force cycles for each significant wave height. In these fatigue analyses the RIFLEX results are considered as the exact solution. For these calculations the full 3-hour simulations are used, and Figure 12 shows the results of the rain flow count of accumulated tension force cycles for each of the significant wave heights. Deviations between RIFLEX and ANN simulations are listed in Table 1. It should be noted that the deviations for the seven individual sea states do not add up to give the total deviation, because the individual sea states do not contribute equally to the overall damage. It is seen that the various networks perform very well on all individual sea states and that the best networks thereby obtain a deviation of less than 2% for the accumulated tension force cycles when summing up the contributions from all sea states. This deviation is a robust estimate and is likely to also represent the accuracy of a full subsequent fatigue life evaluation.
Table 1: Deviations (%) on accumulated tension force cycles.

H_S     E_2     E_2^w    E_4     E_4^w
2       -3.6    -16.3     7.3    65.8
4       -6.9    -13.0     4.8    32.0
6       -4.4     -8.0     2.6    14.9
8       -2.7     -4.8    -1.5     8.6
10      -2.0     -3.5    -1.6     6.9
12      -1.4     -3.1    -1.7     5.3
14      -0.8     -0.3     2.1     3.1
Total   -1.7     -3.8     1.6     6.4

It is seen from the rain flow counting results in Figure 12 and Table 1 that the neural networks trained with unweighted error functions in general perform slightly better than those trained with weighted error functions. Thus, placing a weight on the error function does not seem to have the desired effect in this application concerning the analysis of a mooring line system for a floating platform.
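The rain flow counts underlying Table 1 follow the classic three-point counting rule. The following is a minimal sketch in the style of ASTM E1049-85, not the counting code used in the study:

```python
def turning_points(series):
    """Reduce a load record to its reversal points (local extrema)."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # same direction: extend the current excursion
        else:
            tp.append(x)
    return tp

def rainflow(series):
    """Three-point rainflow count. Returns (range, count) pairs,
    where count is 1.0 for a full cycle and 0.5 for a half cycle."""
    stack, cycles = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            X = abs(stack[-1] - stack[-2])   # candidate range
            Y = abs(stack[-2] - stack[-3])   # previous range
            if X < Y:
                break
            if len(stack) == 3:
                cycles.append((Y, 0.5))      # range contains the start point
                stack.pop(0)
            else:
                cycles.append((Y, 1.0))      # closed (full) cycle
                stack[-3:] = [stack[-1]]     # remove the two inner points
    for a, b in zip(stack, stack[1:]):       # residual: count as half cycles
        cycles.append((abs(b - a), 0.5))
    return cycles

series = [-2, 1, -3, 5, -1, 3, -4, 4, -2]    # the ASTM E1049 example record
counts = rainflow(series)
# aggregated per range: 3 -> 0.5, 4 -> 1.5, 6 -> 0.5, 8 -> 1.0, 9 -> 0.5
```

In a fatigue comparison like Table 1, such counts from the ANN and the RIFLEX records would be accumulated per sea state and the relative deviation taken between the two totals.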
Figure 9: Comparison of simulated top tension forces (kN) using ANN and RIFLEX for the four wave heights that were not part of the network training (H_S = 4, 6, 10, and 12 m).
Figure 10: Validation error during training. (a) Error functions with R = 2: E_2 and E_2^w. (b) Error functions with R = 4: E_4 and E_4^w.
Figure 11: Comparison of performance for the four different error functions. (a) Validation error during training. (b) Accuracy of simulation at a peak in the top tension force time history (kN).
Figure 12: Accumulated stress for each sea state, calculated using rain flow counting.
4. Conclusion

It has been shown how a relatively small and compact artificial neural network can be trained to perform high-speed dynamic simulation of tension forces in a mooring line on a floating platform. Furthermore, it has been shown that a proper selection of training data enables the ANN to cover a wide range of different sea states, even sea states that are not included directly in the training data. In the example presented in this paper it is clear that weighting the error function used to train an ANN in order to emphasize the peak response does not improve the network performance with respect to the accuracy of fatigue calculations. In fact, the ANN appears to perform worse when trained with a weighted error function. On the other hand, it appears that increasing the power of the error function from two to four provides a slight improvement in the performance of the trained ANN.
However, the idea of a weighted error function seems to reduce the ANN performance: apparently, focusing on the high amplitudes deteriorates the low-amplitude response more than it improves the response at large amplitudes.
In conclusion, the Minkowski error with R = 4 is interesting for the mooring line example in more than one aspect. It puts more focus on the large amplitudes and improves the ANN slightly. Furthermore, the second derivative of E_4 is fairly easy to determine, which makes this objective function suitable for several network optimizing schemes, such as Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS), that are based on the second derivative of the error function. Network optimization is, however, not considered further in this paper but will be the subject of future work.
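The claim that the second derivative of E_4 is easy to determine can be made concrete: for E_4 = (1/N) Σ_n (y_n − t_n)^4, the diagonal second derivative with respect to a network output is 12(y_n − t_n)²/N, always nonnegative. A small illustration with toy numbers (not from the paper):

```python
import numpy as np

def e4(y, t):
    """Minkowski error with R = 4."""
    return np.mean((y - t) ** 4)

def e4_second_derivative(y, t):
    """Diagonal second derivative of E4 with respect to each output:
    d2E/dy_n2 = 12 (y_n - t_n)^2 / N."""
    return 12.0 * (y - t) ** 2 / y.size

y = np.array([1.1, 1.8, 3.5])   # toy network outputs
t = np.array([1.0, 2.0, 4.0])   # toy targets
print(e4(y, t))
print(e4_second_derivative(y, t))
```

It is this cheap, closed-form curvature information that OBD/OBS-type pruning schemes consume when ranking weights for removal.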
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
References

[1] DNV, "Offshore Standard DNV-OS-F201—Dynamic Risers," Det Norske Veritas, October 2010.
[2] DNV, "Recommended Practice DNV-RP-F204—Riser Fatigue," Det Norske Veritas, July 2005.
[3] M. H. Patel and F. B. Seyed, "Review of flexible riser modelling and analysis techniques," Engineering Structures, vol. 17, no. 4, pp. 293–304, 1995.
[4] H. Adeli, "Neural networks in civil engineering: 1989–2000," Computer-Aided Civil and Infrastructure Engineering, vol. 16, no. 2, pp. 126–142, 2001.
[5] B. Warner and M. Misra, "Understanding neural networks as statistical tools," The American Statistician, vol. 50, no. 4, pp. 284–293, 1996.
[6] K. Ordaz-Hernandez, X. Fischer, and F. Bennis, "Model reduction technique for mechanical behaviour modelling: efficiency criteria and validity domain assessment," Journal of Mechanical Engineering Science, vol. 222, no. 3, pp. 493–505, 2008.
[7] M. Hosseini and H. Abbas, "Neural network approach for prediction of deflection of clamped beams struck by a mass," Thin-Walled Structures, vol. 60, pp. 222–228, 2012.
[8] R. Guarize, N. A. F. Matos, L. V. S. Sagrilo, and E. C. P. Lima, "Neural networks in the dynamic response analysis of slender marine structures," Applied Ocean Research, vol. 29, no. 4, pp. 191–198, 2007.
[9] N. H. Christiansen, P. E. T. Voie, J. Høgsberg, and N. Sødahl, "Efficient mooring line fatigue analysis using a hybrid method time domain simulation scheme," in Proceedings of the 32nd ASME International Conference on Ocean, Offshore and Arctic Engineering (OMAE '13), vol. 1, 2013.
[10] S. A. Solla, E. Levin, and M. Fleisher, "Accelerated learning in layered neural networks," Complex Systems, vol. 2, no. 6, pp. 625–639, 1988.
[11] J. B. Hampshire and A. H. Waibel, "A novel objective function for improved phoneme recognition using time-delay neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 216–228, 1990.
[12] H. Altincay and M. Demirekler, "Comparison of different objective functions for optimal linear combination of classifiers for speaker identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '01), vol. 1, pp. 401–404, May 2001.
[13] MARINTEK, Simo Theory Manual, Sintef, Trondheim, Norway, 2009.
[14] MARINTEK, Riflex Theory Manual, Sintef, Trondheim, Norway, 2008.
[15] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[16] C. M. Bishop, Neural Networks for Pattern Recognition, The Clarendon Press, Oxford, UK, 1995.
[17] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
8 Journal of Applied Mathematics
0 2 4 6 80
0002
0004
0006
0008
001
Number of time steps included in input
Ete
st
Figure 7 Validation error as function of model memory
0 500 1000 1500 2000 25000
002
004
006
008
01
012
Length of simulated training data (s)
Ete
st
Figure 8 Validation error as function of amount of training data
flow counts of the tension force cycles for each significantwave height In these fatigue analyses the RIFLEX resultsare considered as the exact solution For these calculationsthe full 3-hour simulations are used and Figure 12 shows theresults of the rain flow count of accumulated tension forcecycles for each of the significant wave heights Deviationsbetween RIFLEX and ANN simulations are listed in Table 1It should be noted that the deviations for the individual sevensea states do not add up to give the total deviation because theindividual sea states do not contribute equally to the overalldamage It is seen that the various networks perform verywellon all individual sea states and that the best networks therebyobtain a deviation of less than 2 for the accumulated tensionforce cycles when summing up the contributions from allsea states This deviation is a robust estimate and is likely toalso represent the accuracy of a full subsequent fatigue lifeevaluation
It is seen from the rain flow counting results in Figure 12and Table 1 that the neural networks trained with unweighted
Table 1 Deviations on accumulated tension force cycles
119867
119878119864
2119864
119908
2119864
4119864
119908
4
2 minus36 minus163 73 6584 minus69 minus130 48 3206 minus44 minus80 26 1498 minus27 minus48 minus15 8610 minus20 minus35 minus16 6912 minus14 minus31 minus17 5314 minus08 minus03 21 31Total minus17 minus38 16 64
error function in general perform slightly better than thosetrained with weighted error functionsThus placing a weighton the error function does not seem to have the desired effectin this application concerning the analysis of a mooring linesystem for a floating platform
Journal of Applied Mathematics 9
1000 1100 1200 1300 1400 15002100
2200
2300
2400
2500
2600
2700
1600 1700 1800 1900 2000 21002000
2200
2400
2600
2800
2200 2300 2400 2500 2600 27001000
1500
2000
2500
3000
2800 2900 3000 3100 3200 33001000
1500
2000
2500
3000
(kN
)(k
N)
(kN
)(k
N)
ANNRIFLEX
ANNRIFLEX
HS = 6HS = 4
HS = 10 HS = 12
Figure 9 Comparison of simulated top tension forces using ANN and RIFLEX for the four wave heights that were not part of the networktraining
Training iterations
Ete
st
10minus1
10minus2
10minus3
102
103
104
105
E2
Ew
2
(a)
Ete
st
10minus1
10minus2
10minus3
100
10minus4
102
103
104
105
Training iterations
E4
Ew
4
(b)
Figure 10 Validation error during training (a) Error functions with 119877 = 2 1198642 1198641199082 (b) Error functions with 119877 = 4 119864
4 1198641199084
10 Journal of Applied Mathematics
Training iterations
Ete
st
10minus1
10minus2
10minus3
100
102
103
104
105
E4
Ew
4E2
Ew
2
(a) Validation error during training
7850 7855 78602500
3000
3500
RIFLEX
Time (s)
E4
E2
Ew
4
Ew
2
(kN
)(b) Accuracy of simulation at peak
Figure 11 Comparison of performance for the four different error functions
2 4 6 8 10 12 140
05
1
15
2
Acc
tens
ion
RIFLEX
times108
E4
Ew
4E2
Ew
2
HS
Figure 12 Accumulated stress for each sea statemdashcalculated using rain flow counting
4 Conclusion
It has been shown how a relatively small and compactartificial neural network can be trained to performhigh speeddynamic simulation of tension forces in a mooring line ona floating platform Furthermore it has been shown that aproper selection of training data enables the ANN to covera wide range of different sea states even for sea states thatare not included directly in the training data In the examplepresented in this paper it is clear that weighting the errorfunction used to train an ANN in order to emphasize peakresponse does not improve the network performance withrespect to accuracy of fatigue calculations In fact the ANNappears to perform worse when trained with the weightederror function On the other hand it appears that increasingthe power of the error function from two to four provides aslight improvement to the performance of the trained ANN
However the idea of a weighted error function seems toreduce the ANN performance So apparently focusing onthe high amplitudes seems to deteriorate the low amplituderesponse more than it improves the response with largeamplitudes
As a conclusion the Minkowski error with 119877 = 4 is inter-esting for the mooring line example in more than one aspectIt provides more focus on the large amplitudes and improvesthe ANN slightly Furthermore the second derivative of the119864
4is fairly easy to determine which makes this objective
function suitable for several network optimizing schemessuch as Optimal Brain Damage (OBD) and Optimal BrainSurgeon (OBS) that are based on the second derivative ofthe error function Network optimization is however notconsidered further in this paper but will be subject of futurework
Journal of Applied Mathematics 11
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
References
[1] DNV ldquoOffshore Standard DNV-OS-F201mdashDynamic RisersrdquoDet Norske Veritas October 2010
[2] DNV ldquoRecommended Practice DNV-RP-F204mdashRiser FatiguerdquoDet Norske Veritas July 2005
[3] M H Patel and F B Seyed ldquoReview of flexible riser modellingand analysis techniquesrdquo Engineering Structures vol 17 no 4pp 293ndash304 1995
[4] H Adeli ldquoNeural networks in civil engineering 1989ndash2000rdquoComputer-AidedCivil and Infrastructure Engineering vol 16 no2 pp 126ndash142 2001
[5] B Warner and M Misra ldquoUnderstanding neural networks asstatistical toolsrdquo The American Statistician vol 50 no 4 pp284ndash293 1996
[6] K Ordaz-Hernandez X Fischer and F Bennis ldquoModel reduc-tion technique for mechanical behaviour modelling efficiencycriteria and validity domain assessmentrdquo Journal of MechanicalEngineering Science vol 222 no 3 pp 493ndash505 2008
[7] M Hosseini and H Abbas ldquoNeural network approach forprediction of deflection of clamped beams struck by a massrdquoThin-Walled Structures vol 60 pp 222ndash228 2012
[8] R Guarize N A F Matos L V S Sagrilo and E C P LimaldquoNeural networks in the dynamic response analysis of slendermarine structuresrdquo Applied Ocean Research vol 29 no 4 pp191ndash198 2007
[9] N H Christiansen P E T Voie J Hoslashgsberg and N SoslashdahlldquoEfficient mooring line fatigue analysis using a hybrid methodtime domain simulation schemerdquo in Proceedings of the 32ndASME International Conference on Ocean Offshore and ArcticEngineering (OMAE rsquo13) vol 1 2013
[10] S A Solla E Levin and M Fleisher ldquoAccelerated learning inlayered neural networksrdquo Complex Systems vol 2 no 6 pp625ndash639 1988
[11] J B Hampshire and A H Waibel ldquoNovel objective functionfor improved phoneme recognition using time-delay neuralnetworksrdquo IEEE Transactions on Neural Networks vol 1 no 2pp 216ndash228 1990
[12] HAltincay andMDemirekler ldquoComparison of different objec-tive functions for optimal linear combination of classifiers forspeaker identificationrdquo in Proceedings of the IEEE InternationalConference on Acoustics Speech and Signal Processing (ICASSPrsquo01) vol 1 pp 401ndash404 May 2001
[13] MARINTEK SimoTheoryManual Sintef Trondheim Norway2009
[14] MARINTE Riflex Theory Manual Sintef Trondheim Norway2008
[15] C M Bishop Pattern Recognition and Machine LearningSpringer New York NY USA 2006
[16] C M Bishop Neural Networks for Pattern Recognition TheClarendon Press Oxford UK 1995
[17] R Fletcher Practical Methods of Optimization John Wiley ampSons Chichester UK 2nd edition 1987
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Applied Mathematics 9
[Figure 9: Comparison of simulated top tension forces (kN) using ANN and RIFLEX for the four wave heights that were not part of the network training: HS = 4, HS = 6, HS = 10, and HS = 12.]
[Figure 10: Validation error Etest during training. (a) Error functions with R = 2: E2 and Ew2. (b) Error functions with R = 4: E4 and Ew4.]
[Figure 11: Comparison of performance for the four different error functions. (a) Validation error Etest during training for E2, Ew2, E4, and Ew4. (b) Accuracy of simulation at a peak: top tension (kN) over 7850–7860 s for each error function compared with RIFLEX.]
[Figure 12: Accumulated stress (×10^8) for each sea state HS, calculated using rain flow counting; ANN results for E2, Ew2, E4, and Ew4 compared with RIFLEX.]
4 Conclusion
It has been shown how a relatively small and compact artificial neural network can be trained to perform high-speed dynamic simulation of tension forces in a mooring line on a floating platform. Furthermore, it has been shown that a proper selection of training data enables the ANN to cover a wide range of different sea states, even sea states that are not included directly in the training data. In the example presented in this paper it is clear that weighting the error function used to train an ANN in order to emphasize peak response does not improve the network performance with respect to accuracy of fatigue calculations. In fact, the ANN appears to perform worse when trained with the weighted error functions. On the other hand, increasing the power of the error function from two to four provides a slight improvement to the performance of the trained ANN.
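The error measures compared in this study can be summarized in a generic Minkowski-R form. The following is a minimal sketch, assuming the common 1/R-normalized textbook definition (cf. Bishop [15]) rather than the paper's exact normalization; the weight vector `w` is a hypothetical stand-in for the peak-emphasis weighting of the Ew2/Ew4 variants.

```python
import numpy as np

def minkowski_error(y, t, R=2.0, w=None):
    """Minkowski-R error between network output y and target t.

    R = 2 gives the usual sum-of-squares error; R = 4 penalizes large
    residuals more heavily. Optional weights w emphasize selected
    samples (e.g. peaks), mirroring the weighted variants Ew2/Ew4.
    """
    r = np.abs(np.asarray(y) - np.asarray(t)) ** R
    if w is not None:
        r = np.asarray(w) * r
    return r.sum() / R

y = np.array([1.0, 2.0, 5.0])  # network outputs
t = np.array([1.0, 1.0, 3.0])  # targets; residuals are 0, 1, 2
print(minkowski_error(y, t, R=2))  # (0 + 1 + 4) / 2 = 2.5
print(minkowski_error(y, t, R=4))  # (0 + 1 + 16) / 4 = 4.25
```

A residual of 2 contributes four times the error of a residual of 1 at R = 2, but sixteen times at R = 4, so raising R shifts the training focus toward the large (peak) amplitudes.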
The weighted error functions thus reduce the ANN performance: focusing on the high amplitudes apparently deteriorates the low-amplitude response more than it improves the response at large amplitudes.
In conclusion, the Minkowski error with R = 4 is interesting for the mooring line example in more than one respect. It puts more emphasis on the large amplitudes and slightly improves the trained ANN. Furthermore, the second derivative of E4 is fairly easy to determine, which makes this objective function suitable for network optimization schemes such as Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS) that are based on the second derivative of the error function. Network optimization is, however, not considered further in this paper but will be the subject of future work.
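The remark about second derivatives can be made concrete. Assuming the per-sample 1/R-normalized form E4 = (y − t)^4 / 4 (the exact normalization used in the paper is not reproduced here), both derivatives with respect to the network output are simple polynomials in the residual, which is what makes Hessian-based pruning schemes such as OBD/OBS cheap to apply:

```python
def e4_derivatives(y, t):
    """Derivatives of the per-sample Minkowski error E4 = (y - t)**4 / 4
    with respect to the network output y. Both are simple closed-form
    polynomials in the residual, so the error-curvature terms needed
    by OBD/OBS pruning are inexpensive to evaluate."""
    r = y - t
    return r**3, 3.0 * r**2  # (dE4/dy, d2E4/dy2)

grad, curv = e4_derivatives(3.0, 1.0)  # residual r = 2
print(grad, curv)  # 8.0 12.0
```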
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] DNV, "Offshore Standard DNV-OS-F201—Dynamic Risers," Det Norske Veritas, October 2010.
[2] DNV, "Recommended Practice DNV-RP-F204—Riser Fatigue," Det Norske Veritas, July 2005.
[3] M. H. Patel and F. B. Seyed, "Review of flexible riser modelling and analysis techniques," Engineering Structures, vol. 17, no. 4, pp. 293–304, 1995.
[4] H. Adeli, "Neural networks in civil engineering: 1989–2000," Computer-Aided Civil and Infrastructure Engineering, vol. 16, no. 2, pp. 126–142, 2001.
[5] B. Warner and M. Misra, "Understanding neural networks as statistical tools," The American Statistician, vol. 50, no. 4, pp. 284–293, 1996.
[6] K. Ordaz-Hernandez, X. Fischer, and F. Bennis, "Model reduction technique for mechanical behaviour modelling: efficiency criteria and validity domain assessment," Journal of Mechanical Engineering Science, vol. 222, no. 3, pp. 493–505, 2008.
[7] M. Hosseini and H. Abbas, "Neural network approach for prediction of deflection of clamped beams struck by a mass," Thin-Walled Structures, vol. 60, pp. 222–228, 2012.
[8] R. Guarize, N. A. F. Matos, L. V. S. Sagrilo, and E. C. P. Lima, "Neural networks in the dynamic response analysis of slender marine structures," Applied Ocean Research, vol. 29, no. 4, pp. 191–198, 2007.
[9] N. H. Christiansen, P. E. T. Voie, J. Høgsberg, and N. Sødahl, "Efficient mooring line fatigue analysis using a hybrid method time domain simulation scheme," in Proceedings of the 32nd ASME International Conference on Ocean, Offshore and Arctic Engineering (OMAE '13), vol. 1, 2013.
[10] S. A. Solla, E. Levin, and M. Fleisher, "Accelerated learning in layered neural networks," Complex Systems, vol. 2, no. 6, pp. 625–639, 1988.
[11] J. B. Hampshire and A. H. Waibel, "Novel objective function for improved phoneme recognition using time-delay neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 216–228, 1990.
[12] H. Altincay and M. Demirekler, "Comparison of different objective functions for optimal linear combination of classifiers for speaker identification," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), vol. 1, pp. 401–404, May 2001.
[13] MARINTEK, Simo Theory Manual, Sintef, Trondheim, Norway, 2009.
[14] MARINTEK, Riflex Theory Manual, Sintef, Trondheim, Norway, 2008.
[15] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[16] C. M. Bishop, Neural Networks for Pattern Recognition, The Clarendon Press, Oxford, UK, 1995.
[17] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.