16
Research Article Modelling Laser Milling of Microcavities for the Manufacturing of DES with Ensembles Pedro Santos, 1 Daniel Teixidor, 2 Jesus Maudes, 1 and Joaquim Ciurana 2 1 Department of Civil Engineering, Higher Polytechnic School, University of Burgos, Cantabria Avenue, 09006 Burgos, Spain 2 Department of Mechanical Engineering and Industrial Construction, University of Girona, Maria Aurelia Capmany 61, 17003 Girona, Spain Correspondence should be addressed to Jesus Maudes; [email protected] Received 28 December 2013; Accepted 11 March 2014; Published 17 April 2014 Academic Editor: Aderemi Oluyinka Adewumi Copyright © 2014 Pedro Santos et al. is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A set of designed experiments, involving the use of a pulsed Nd:YAG laser system milling 316L Stainless Steel, serve to study the laser-milling process of microcavities in the manufacture of drug-eluting stents (DES). Diameter, depth, and volume error are considered to be optimized as functions of the process parameters, which include laser intensity, pulse frequency, and scanning speed. Two different DES shapes are studied that combine semispheres and cylinders. Process inputs and outputs are defined by considering the process parameters that can be changed under industrial conditions and the industrial requirements of this manufacturing process. In total, 162 different conditions are tested in a process that is modeled with the following state-of-the- art data-mining regression techniques: Support Vector Regression, Ensembles, Artificial Neural Networks, Linear Regression, and Nearest Neighbor Regression. Ensemble regression emerged as the most suitable technique for studying this industrial problem. Specifically, Iterated Bagging ensembles with unpruned model trees outperformed the other methods in the tests. is method can predict the geometrical dimensions of the machined microcavities with relative errors related to the main average value in the range of 3 to 23%, which are considered very accurate predictions, in view of the characteristics of this innovative industrial task. 1. Introduction Laser-milling technology has become a viable alternative to conventional methods for producing complex microfeatures on difficult-to-process materials. It is increasingly employed in the industry, because of its established advantages [1]. As a noncontact material-removal process, laser machining removes smaller and more precise amounts of material, applies highly localized heat inputs to the workpiece, min- imizes distortion, involves no tool wear, and is not subject to certain constraints such as maximum tool force, buildup edge formation, and tool chatter. Micromanufacturing pro- cesses in the field of electronics and medical and biological applications are a growing area of research. High-resolution components, high precision, and small feature size are needed in this field, as well as real 3D fabrication. us, the use of laser machining to produce medical applications has become a growing area of research, one example of which is the fabrication of coronary stents. is research looks at the fabrication and performance of the DES. Some of these DES are metallic stents that include reservoirs that contain the polymer and the drug [2], such as the Janus TES stent [3] which incorporates microreservoirs cut into its abluminal side that are loaded with the drug. e selection of the laser system and the process parameters significantly affects the quality of the microfeature that is milled and the productivity of the process. Although there are several studies which deal with the effect of the process parameters on the quality of the final laser-milled parts, few of them study this effect on a microscale. e literature contains many examples of experimental research looking at the influence of scanning speed, pulse intensity, and pulse frequency on the quality and productivity of laser milling in different materials on a macroscale [47]. ere are many works on microscale machining that have investigated laser-machining processes in laser microdrilling [810], laser microcutting Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2014, Article ID 439091, 15 pages http://dx.doi.org/10.1155/2014/439091

Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Embed Size (px)

Citation preview

Page 1: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Research ArticleModelling Laser Milling of Microcavities forthe Manufacturing of DES with Ensembles

Pedro Santos1 Daniel Teixidor2 Jesus Maudes1 and Joaquim Ciurana2

1 Department of Civil Engineering Higher Polytechnic School University of Burgos Cantabria Avenue 09006 Burgos Spain2Department of Mechanical Engineering and Industrial Construction University of Girona Maria Aurelia Capmany 6117003 Girona Spain

Correspondence should be addressed to Jesus Maudes jmaudesubues

Received 28 December 2013 Accepted 11 March 2014 Published 17 April 2014

Academic Editor Aderemi Oluyinka Adewumi

Copyright copy 2014 Pedro Santos et al This is an open access article distributed under the Creative Commons Attribution Licensewhich permits unrestricted use distribution and reproduction in any medium provided the original work is properly cited

A set of designed experiments involving the use of a pulsed NdYAG laser system milling 316L Stainless Steel serve to study thelaser-milling process of microcavities in the manufacture of drug-eluting stents (DES) Diameter depth and volume error areconsidered to be optimized as functions of the process parameters which include laser intensity pulse frequency and scanningspeed Two different DES shapes are studied that combine semispheres and cylinders Process inputs and outputs are definedby considering the process parameters that can be changed under industrial conditions and the industrial requirements of thismanufacturing process In total 162 different conditions are tested in a process that is modeled with the following state-of-the-art data-mining regression techniques Support Vector Regression Ensembles Artificial Neural Networks Linear Regression andNearest Neighbor Regression Ensemble regression emerged as the most suitable technique for studying this industrial problemSpecifically Iterated Bagging ensembles with unpruned model trees outperformed the other methods in the tests This method canpredict the geometrical dimensions of themachinedmicrocavities with relative errors related to themain average value in the rangeof 3 to 23 which are considered very accurate predictions in view of the characteristics of this innovative industrial task

1 Introduction

Laser-milling technology has become a viable alternative toconventional methods for producing complex microfeatureson difficult-to-process materials It is increasingly employedin the industry because of its established advantages [1]As a noncontact material-removal process laser machiningremoves smaller and more precise amounts of materialapplies highly localized heat inputs to the workpiece min-imizes distortion involves no tool wear and is not subjectto certain constraints such as maximum tool force buildupedge formation and tool chatter Micromanufacturing pro-cesses in the field of electronics and medical and biologicalapplications are a growing area of research High-resolutioncomponents high precision and small feature size are neededin this field as well as real 3D fabrication Thus the useof laser machining to produce medical applications hasbecome a growing area of research one example of which

is the fabrication of coronary stents This research looksat the fabrication and performance of the DES Some ofthese DES are metallic stents that include reservoirs thatcontain the polymer and the drug [2] such as the JanusTES stent [3] which incorporates microreservoirs cut into itsabluminal side that are loaded with the drug The selectionof the laser system and the process parameters significantlyaffects the quality of the microfeature that is milled andthe productivity of the process Although there are severalstudies which deal with the effect of the process parameterson the quality of the final laser-milled parts few of themstudy this effect on amicroscaleThe literature containsmanyexamples of experimental research looking at the influenceof scanning speed pulse intensity and pulse frequency onthe quality and productivity of laser milling in differentmaterials on a macroscale [4ndash7] There are many works onmicroscalemachining that have investigated laser-machiningprocesses in laser microdrilling [8ndash10] laser microcutting

Hindawi Publishing CorporationJournal of Applied MathematicsVolume 2014 Article ID 439091 15 pageshttpdxdoiorg1011552014439091

2 Journal of Applied Mathematics

[11ndash13] and laser micromilling in 2D [14 15] However thereis little research on laser 3D micromilling Pfeiffer et al[16] studied the effects of laser-process parameters on theablation behaviour of tungsten carbide hard metal and steelusing a femtosecond laser for the generation of complex 3Dmicrostructures Karnakis et al [17] demonstrated the laser-milling capacity of a picoseconds laser in different materials(stainless steel alumina and fused silica) Surface topologyinformation was correlated with incident power density inorder to identify optimum processing Qi and Lai [18] used afiber laser tomachine complex shapesThey developed a ther-mal ablationmodel to determine the ablatedmaterial volumeand the dimensions and optimized the parameters to achievemaximum efficiency and minimum thermal effects FinallyTeixidor et al [19] studied the effects of scanning speed pulseintensity and pulse frequency on target width and depthdimensions and surface roughness for the laser milling ofmicrochannels on tool steel They presented a second-ordermodel and a multiobjective process optimization to predictthe responses and to find the optimum combinations ofprocess parameters

Although the manufacturing industry is interested inlaser micromilling and some research has been done tounderstand the main physical and industrial parameters thatdefine the performance of this process the conclusions showthat analytical approaches are necessary for all real casesdue to their complexity Data-mining approaches represent asuitable alternative to such tasks due to their capacity to dealwith multivariate processes and experimental uncertaintiesData-mining is defined in [20] as ldquothe process of discoveringpatterns in datardquo and ldquolsquoextractingrsquo or lsquominingrsquo knowledge fromlarge amounts of datardquo [21] ldquoUseful patterns allow us to makenontrivial predictions on new datardquo [20] (eg predictions onlaser-milling results based on experimental data) Many arti-ficial intelligence techniques have been applied tomacroscalemilling [22ndash24] but there are few examples of the applicationof such techniques to lasermicromilling [25] Artificial neuralnetworks (ANNs) have been proposed to predict the pulseenergy for a desired depth and diameter in micromilling [26]and the material removal rate (MRR) for a fixed ablationdepth depending on scanning velocity pulse frequency [27]and cut kerf quality in terms of dross adherence duringnonvertical laser cutting of 1mm thick mild-steel sheets[28] Finally regression trees have been proposed to opti-mize geometrical dimensions in the micromanufacturing ofmicrochannels [29] Neither of these studies used ensemblesfor process modeling a learning paradigm in which multiplelearners (or regressors) are combined to solve a problemA regressor ensemble can significantly improve the gener-alization ability of a single regressor and can provide betterresults than an individual regressor in many applications[30ndash32] Ensembles have demonstrated their suitability formodeling macroscale milling and drilling [33ndash36] especiallybecause they can achieve highly accurate prediction withlower tuning time of the model parameters [35] In view ofthe lack of published research on modeling the laser millingof 3D microgeometries with ensembles the objective of thiswork is to study themodeling capability of these data-mining

Table 1 Sphere geometry dimensions

Geometry Depth (120583m) 120601 (120583m) Volume (120583m3)Sphere 1 (e1) 50 166 721414Sphere 2 (e2) 70 140 718377Sphere 3 (e3) 90 124 724576

Table 2 Cylinder geometry dimensions

Geometry Depth (120583m) 120601 (120583m) Length (120583m) Volume (120583m3)Cylinder 1 (c1) 50 130 55 723220Cylinder 2 (c2) 70 110 46 721676Cylinder 3 (c3) 90 100 36 725707

Table 3 Factors and factor levels

Factors Factor levelsScanning speed (SS) (mms) 200 400 600Pulse intensity (PI) () 60 78 100Pulse frequency (PF) (kHz) 30 45 60

techniques through the different inputs and outputs that maybe considered priorities from an industrial point of view

2 Experimental Procedure andData Collection

The experimental apparatus described in this study gatheredthe data needed to create the models The experimentationconsisted of milling microcavities in a 316L Stainless Steelworkpiece using a laser system A Deckel Maho NdYAGLasertec 40 machine with a 1064 nm wavelength was usedto perform the experiments The system is a lamp-pumpedsolid-state laser that provides an ideal maximum (theoret-ically estimated) pulse intensity of 14Wcm2 [14] due tothe 100W average power and 30 120583m beam spot diameterThe SS316L workpiece material was selected because it is abiocompatible material commonly used in biomedical appli-cations and specifically for the fabrication of coronary stentsTwo different geometries were used for the experiments Thefirst geometry consisted of a half-spherical shape definedby depth and diameter dimensions The second geometrywas a half-cylindrical shape with a quarter sphere on bothsides defined by depth diameter and length dimensionsBoth geometries and an example of a laser-milled cavity arepresented in Figure 1 The geometries were fabricated withdifferent combinations of dimensions while maintaining thesame volume Tables 1 and 2 present the three combinations ofdimensions for the spherical and the cylindrical geometriesrespectivelyThese geometries and dimensions were selectedbecause they provide sufficient space to machine the cavitiesof these cardiovascular drug-eluting stent struts which is animportant part of their manufacturing process

A full factorial design of experiments was developed inorder to analyze the effects of pulse frequency (PF) scanningspeed (SS) and pulse intensity levels (PI percentage of the

Journal of Applied Mathematics 3

L

D120601

D

120601

Figure 1 Cavity geometries used in the experiments

ideal maximum pulse intensity) on the responses Somescreening experiments were performed to determine theproper parametric levels Three different levels were selectedfrom the results of each input factor which are presentedin Table 3 This design of experiments resulted in a total of162 experiments 27 combinations for each geometry understudy All the experiments were machined from the same316L SS blank under the same ambient conditions Theresponse variables were the cavity dimensions (depth andradius) and the volume of removed material A confocalAxio CSM 700 Carl Zeiss microscope was used for thedimensional measurements and for characterization of thecavities Moreover negatives of some of the samples wereobtained with surface replicant silicone in order to obtain 3DSEM images

Having performed the experimental tests the inputs andoutputs for the datasets had to be defined to generate thedata sets for the data-mining modeling On the whole theselection of the inputs is easy because they are set by thespecifications of the equipment the inputs are the parametersthat the process engineer can change in themachineThey arethe same as those considered to define the experimental testsexplained above Table 4 summarizes all the selected inputstheir units ranges and the relationship that they have withother inputs

The definition of the data set outputs takes different inter-ests into account that relate to the industrialmanufacturing ofDES In some cases a productivity orientation will encouragethe process engineer to optimize productivity (in terms of theMRR) keeping geometrical accuracy under certain acceptablethresholds (by fixing amaximum relative error in geometricalparameters) In other cases the geometrical accuracy will bethe main requirement and productivity will be a secondary

objective In yet other cases only one geometrical parameterfor example the depth of the DES will be critical and theother geometrical parameters should be kept under certainthresholds once again by fixing a maximum relative error forthese geometrical parameters Therefore this work considersthe geometrical dimensions and the MRR that is actuallyobtained as its output their deviance from the programmedvalues and the relative errors between the programmed andthe real values Table 5 summarizes all the calculated outputstheir units ranges and the relationship they have with otherinput or output variables In summary the 162 differentlaser conditions that were tested provided 14 data sets of 162instances each with 9 attributes and one output variable to bepredicted

3 Data-Mining Techniques

In our study we consider the analysis of each output variableseparately by defining a one-dimensional regression problemfor each case A regressor is a data-mining model in whichan output variable 119910 is modelled as a function 119891 of a set ofindependent variables 119909 called attributes in the conventionalnotation of data-mining where119898 is the number of attributesThe function is expressed as follows

119910estimated = 119891 (119909) 119909 isin 119877119898 119910 isin 119877 (1)

The aim of this work is to determine the most suitableregressor for this industrial problem The selection is per-formed by comparing the rootmean squared error (RMSE) ofseveral regressors over the data set Having a data collectionof 119899 pairs of real values 119909

119894 119910119894119894=119899

119894=1 the RMSE is an estimation

of the expected difference between the real and the forecasted

4 Journal of Applied Mathematics

Table 4 Input variables

Variable Units Range Relationshipx1 Programmed depth 120583m 50ndash90 Independentx2 Programmed radio 120583m 50ndash83 Independentx3 Programmed length 120583m 0ndash55 Independentx4 Programmed volume 10

3times 120583m3 718ndash726 43120587119909

3

2+ 12119909

2

21199093

x5 Intensity 60ndash100 Independentx6 Frequency KHz 30ndash60 Independentx7 Speed mms 200ndash600 Independentx8 Time s 9ndash24 Independentx9 Programmed MRR 10

3times 120583m3

s 30ndash81 11990941199098

Table 5 Output variables

Variable Units Range Relationshipy1 Measured volume 10

3times 120583m3 130ndash1701 Independent

y2 Measured depth 120583m 25ndash23060 Independenty3 Measured diameter 120583m 11850ndash20880 Independenty4 Measured length 120583m 0ndash7020 Independenty5 Measured MRR 10

3times 120583m3

s 12ndash121 11991011199098

y6 Volume error 103times 120583m3 minus980ndash596 119909

4minus 1199101

y7 Depth error 120583m minus18060ndash5690 1199091minus 1199092

y8 Width error 120583m minus6280ndashminus1663 21199092minus 1199103

y9 Length error 120583m minus2550ndash20880 1199093minus 1199104

y10 MRR error 103times 120583m3s minus61ndash66 119909

9minus 1199105

y11 Volume relative error Dimensionless minus136ndash082 11991061199094

y12 Depth relative error Dimensionless minus361ndash063 11991071199091

y13 Width relative error Dimensionless minus055ndash010 1199108(21199092)

y14 Length relative error Dimensionless minus071ndash015 11991091199093

output by a regressor It is expressed as the square root ofthe mean of the squares of the deviations as shown in thefollowing equation

RMSE =radic

sum119899

119905=1(119910119905minus 119891 (119909

119905))2

119899

(2)

We tested a wide range of the main families of state-of-the-art regression techniques as follows

(i) Function-based regressors we used two of themost popular algorithms Support Vector Regres-sion (SVR) [37] and ANNs [38] and also LinearRegression [39] which has an easier formulation thatallows direct physical interpretation of the modelsWe should note thewidespread use of SVR [40] whileANNshave been successfully applied to a great varietyof industrial modeling problems [41ndash43]

(ii) Instance-based methods specifically their most rep-resentative regressor k-nearest neighbors regressor[44] in this type of algorithm it is not necessaryto express an analytic relationship between the inputvariables and the output that is modeled an aspectthat makes this approach totally different from theother Instead of using an explicit formulation to

obtain a prediction it is calculated from set valuesstored in the training phase

(iii) Decision-tree-based regressors we have includedthese kinds of methods because they are used in theensembles as regressors as explained in Section 32

(iv) Ensemble techniques [45] are among the most pop-ular in the literature These methods have beensuccessfully applied to a wide variety of industrialproblems [34 46ndash49]

31 Linear Regression One of the most natural and simplestways of expressing relations between a set of inputs and anoutput is by using a linear function In this type of regressorthe variable to forecast is given by a linear combination of theattributes with predeterminedweights [39] as detailed in thefollowing equation

119910(119894)

estimated =

119896

sum

119895=0

(119908119895times 119909(119894)

119895) (3)

where 119910(119894)estimated denotes the output of the 119894th training instanceand 119909

(119894)

119895the jth attribute of the 119894th instance The sum of the

squares of the differences between real and forecasted output

Journal of Applied Mathematics 5

times1

times7

times9 times7

times6

times8

times7

times5

81833478 60174006 41546599

36805988 22336725 40669213

24656857

401199 29818975

lt60

lt500

lt300

lt69

ge60

ge500

lt500 ge500

ge300ge517955

ge375

ge69

lt11 ge11

lt375

lt517955

Figure 2 Model of the measured volume with a regression tree

is minimized to calculate the most adequate weights 119908119895

following the expression given in the following equation

119899

sum

119894=0

[

[

119910(119894)

minus

119896

sum

119895=0

(119908119895times 119909(119894)

119895)]

]

(4)

We used an improvement to this original formulation byselecting the attributes with the Akaike information criterion(AIC) [50] In (5) we can see how the AIC is defined wherek is the number of free parameters of the model (ie thenumber of input variables considered) and119875 is the probabilityof the estimation fitting the training dataThe aim is to obtainmodels that fit the training data but with as few parameters aspossible

AIC = minus2 ln (119875) + 2119896 (5)

32 Decision-Tree-Based Regressors The decision tree is adata-mining technique that builds hierarchical models thatare easily interpretable because they may be representedin graphical form as shown in Figures 2 and 3 with anexample of the output measured length as a function of theinput attributes In this type of model all the decisions areorganised around a single variable resulting in the final hier-archical structure This representation has three elementsthe nodes attributes taken for the decision (ellipses in therepresentation) the leaves final forecasted values (squares)

and these two elements being connected by arcs with thesplitting values for each attribute

As base regressors we have two types of regressors thestructure of which is based on decision trees regression treesand model trees Both families of algorithms are hierarchicalmodels represented by an abstract tree but they differ withregard to what their leaves store [51] In the case of regressiontrees a value is stored that represents the average value ofthe instances enclosed by the leaf while the model trees havea linear regression model that predicts the output value forthese instances The intrasubset variation in the class valuesdown each branch is minimized to build the initial tree [52]

In our experimentation we used one representativeimplementation of the two families reduced-error pruningtree (REPTree) [20] a regression tree and M5P [51] a modeltree In both cases we tested two configurations prunedand unpruned trees In the case of having one single treeas a regressor for some typologies of ensembles it is moreappropriate to prune the trees to avoid overfitting the trainingdata while some ensembles can take advantage of havingunpruned trees [53]

33 Ensemble Regressors An ensemble regressor combinesthe predictions of a set of so-called base regressors using avoting system [54] as we can see in Figure 4 Probably thethree most popular ensemble techniques are Bagging [55]Boosting [56] and Random Subspaces [53] For Bagging and

6 Journal of Applied Mathematics

times1

times5LM 1

LM 3LM 2

le60 gt60

le69

gt69

minus20796469lowastx1

y1 =

y1 = y1 =

+70976312lowastx2minus2658774lowastx3+5989864lowastx5minus5705179lowastx7+5011662636

minus11649482lowastx1+30870216lowastx2+2546212lowastx3+9627808lowastx5minus2879248lowastx7minus05008lowastx9+3306695834

minus11649482lowastx1+16528734lowastx3+7034302lowastx5minus539148lowastx7minus1497lowastx9+5231650737

Figure 3 Model of the measured volume with a model tree

Boosting the ensemble regressor is formed from a set ofweak base regressors trained by applying the same learningalgorithm to different sets obtained from the training setIn Bagging each base regressor is trained with a datasetobtained from random sampling with replacement [55] (iea particular instance may appear repeated several times ormay not be considered in any base regressor) As a resultthe base regressors are independent However Boosting usesall the instances and a set of weights to train each baseregressor Each instance has a weight pointing out howimportant it is to predict that instance correctly Some baseregressors can take weights into account (eg decision trees)Boosting trains base regressors sequentially because errorsfor training instances in the previous base regressor areused for reweighting The new base regressors are focusedon instances that previous base regressors have wronglypredictedThe voting system for Boosting is also weighted bythe accuracy of each base regressor [57]

Random Subspaces follow a different approach eachbase regressor is trained in a subset of fewer dimensionsthan the original space This subset of features is randomlychosen for all regressors This procedure is followed with theintention of avoiding the well-known problem of the curse ofdimensionality that occurs with many regressors when thereare many features as well as improving accuracy by choosing

base regressors with low correlations between them In totalwe have used five ensemble regression techniques of the stateof the art for regression two variants of Bagging one variantof Boosting and Random Subspaces The list of ensemblemethods used in the experimentation is enumerated below

(i) Bagging is in its initial formulation for regression(ii) Iterated Bagging combines several Bagging ensem-

bles the first one keeping to a typical construction andthe others using residuals (differences between thereal and the predicted values) for training purposes[58]

(iii) Random Subspaces are in their formulation forregression

(iv) AdaBoostR2 [59] is a boosting implementation forregression Calculated from the absolute errors ofeach training example 119897(119894) = |119891

119877(119909119894) minus 119910119894| the so-

called loss function 119871(119894) was used to estimate theerror of each base regressor and to assign a suitableweight to each one Let Den be the maximum valueof 119897(119894) in the training set then three different lossfunctions are used linear 119871

119897(119894) = 119897(119894)Den square

119871119878(119894) = [119897(119894)Den]2 and exponential 119871

119864(119894) = 1 minus

exp(minus119897(119894)Den)(v) Additive regression is this regressor that has a learn-

ing algorithm called Stochastic Gradient Boosting[60] which is a modification of Adaptive Bagging ahybrid Bagging Boosting procedure intended for leastsquares fitting on additive expansions [60]

34 k-Nearest Neighbor Regressor This regressor is the mostrepresentative algorithm among the instance-based learningThese kinds ofmethods forecast the output value using storedvalues of the most similar instances of the training data [61]The estimation is the mean of the k most similar traininginstances Two configuration decisions have to be taken

(i) how many nearest neighbors to use to forecast thevalue of a new instance

(ii) which distance function to use to measure the simi-larity between the instances

In our experimentation we have used the most commondefinition of the distance function the Euclidean distancewhile the number of neighbors is optimized using cross-validation

35 Support Vector Regressor This kind of regressor is basedon a parametric function the parameters of which areoptimized during the training process in order to minimizethe RMSE [62] Mathematically the goal is to find a function119891(119909) that has the most deviation 120598 from the targets thatare actually obtained 119910

119894 for all the training data and at the

same time is as flat as possible The following equation is anexample of SVR with the particular case of a linear functioncalled linear SVR where ⟨sdot sdot⟩ denotes the inner product inthe input space119883

119891 (119909) = ⟨119908 119909⟩ + 119887 with 119908 isin 119883 119887 isin 119877 (6)

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 2: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

2 Journal of Applied Mathematics

[11ndash13] and laser micromilling in 2D [14 15] However thereis little research on laser 3D micromilling Pfeiffer et al[16] studied the effects of laser-process parameters on theablation behaviour of tungsten carbide hard metal and steelusing a femtosecond laser for the generation of complex 3Dmicrostructures Karnakis et al [17] demonstrated the laser-milling capacity of a picoseconds laser in different materials(stainless steel alumina and fused silica) Surface topologyinformation was correlated with incident power density inorder to identify optimum processing Qi and Lai [18] used afiber laser tomachine complex shapesThey developed a ther-mal ablationmodel to determine the ablatedmaterial volumeand the dimensions and optimized the parameters to achievemaximum efficiency and minimum thermal effects FinallyTeixidor et al [19] studied the effects of scanning speed pulseintensity and pulse frequency on target width and depthdimensions and surface roughness for the laser milling ofmicrochannels on tool steel They presented a second-ordermodel and a multiobjective process optimization to predictthe responses and to find the optimum combinations ofprocess parameters

Although the manufacturing industry is interested inlaser micromilling and some research has been done tounderstand the main physical and industrial parameters thatdefine the performance of this process the conclusions showthat analytical approaches are necessary for all real casesdue to their complexity Data-mining approaches represent asuitable alternative to such tasks due to their capacity to dealwith multivariate processes and experimental uncertaintiesData-mining is defined in [20] as ldquothe process of discoveringpatterns in datardquo and ldquolsquoextractingrsquo or lsquominingrsquo knowledge fromlarge amounts of datardquo [21] ldquoUseful patterns allow us to makenontrivial predictions on new datardquo [20] (eg predictions onlaser-milling results based on experimental data) Many arti-ficial intelligence techniques have been applied tomacroscalemilling [22ndash24] but there are few examples of the applicationof such techniques to lasermicromilling [25] Artificial neuralnetworks (ANNs) have been proposed to predict the pulseenergy for a desired depth and diameter in micromilling [26]and the material removal rate (MRR) for a fixed ablationdepth depending on scanning velocity pulse frequency [27]and cut kerf quality in terms of dross adherence duringnonvertical laser cutting of 1mm thick mild-steel sheets[28] Finally regression trees have been proposed to opti-mize geometrical dimensions in the micromanufacturing ofmicrochannels [29] Neither of these studies used ensemblesfor process modeling a learning paradigm in which multiplelearners (or regressors) are combined to solve a problemA regressor ensemble can significantly improve the gener-alization ability of a single regressor and can provide betterresults than an individual regressor in many applications[30ndash32] Ensembles have demonstrated their suitability formodeling macroscale milling and drilling [33ndash36] especiallybecause they can achieve highly accurate prediction withlower tuning time of the model parameters [35] In view ofthe lack of published research on modeling the laser millingof 3D microgeometries with ensembles the objective of thiswork is to study themodeling capability of these data-mining

Table 1 Sphere geometry dimensions

Geometry Depth (120583m) 120601 (120583m) Volume (120583m3)Sphere 1 (e1) 50 166 721414Sphere 2 (e2) 70 140 718377Sphere 3 (e3) 90 124 724576

Table 2 Cylinder geometry dimensions

Geometry Depth (120583m) 120601 (120583m) Length (120583m) Volume (120583m3)Cylinder 1 (c1) 50 130 55 723220Cylinder 2 (c2) 70 110 46 721676Cylinder 3 (c3) 90 100 36 725707

Table 3 Factors and factor levels

Factors Factor levelsScanning speed (SS) (mms) 200 400 600Pulse intensity (PI) () 60 78 100Pulse frequency (PF) (kHz) 30 45 60

techniques through the different inputs and outputs that maybe considered priorities from an industrial point of view

2 Experimental Procedure andData Collection

The experimental apparatus described in this study gatheredthe data needed to create the models The experimentationconsisted of milling microcavities in a 316L Stainless Steelworkpiece using a laser system A Deckel Maho NdYAGLasertec 40 machine with a 1064 nm wavelength was usedto perform the experiments The system is a lamp-pumpedsolid-state laser that provides an ideal maximum (theoret-ically estimated) pulse intensity of 14Wcm2 [14] due tothe 100W average power and 30 120583m beam spot diameterThe SS316L workpiece material was selected because it is abiocompatible material commonly used in biomedical appli-cations and specifically for the fabrication of coronary stentsTwo different geometries were used for the experiments Thefirst geometry consisted of a half-spherical shape definedby depth and diameter dimensions The second geometrywas a half-cylindrical shape with a quarter sphere on bothsides defined by depth diameter and length dimensionsBoth geometries and an example of a laser-milled cavity arepresented in Figure 1 The geometries were fabricated withdifferent combinations of dimensions while maintaining thesame volume Tables 1 and 2 present the three combinations ofdimensions for the spherical and the cylindrical geometriesrespectivelyThese geometries and dimensions were selectedbecause they provide sufficient space to machine the cavitiesof these cardiovascular drug-eluting stent struts which is animportant part of their manufacturing process

A full factorial design of experiments was developed inorder to analyze the effects of pulse frequency (PF) scanningspeed (SS) and pulse intensity levels (PI percentage of the

Journal of Applied Mathematics 3

L

D120601

D

120601

Figure 1 Cavity geometries used in the experiments

ideal maximum pulse intensity) on the responses Somescreening experiments were performed to determine theproper parametric levels Three different levels were selectedfrom the results of each input factor which are presentedin Table 3 This design of experiments resulted in a total of162 experiments 27 combinations for each geometry understudy All the experiments were machined from the same316L SS blank under the same ambient conditions Theresponse variables were the cavity dimensions (depth andradius) and the volume of removed material A confocalAxio CSM 700 Carl Zeiss microscope was used for thedimensional measurements and for characterization of thecavities Moreover negatives of some of the samples wereobtained with surface replicant silicone in order to obtain 3DSEM images

Having performed the experimental tests the inputs andoutputs for the datasets had to be defined to generate thedata sets for the data-mining modeling On the whole theselection of the inputs is easy because they are set by thespecifications of the equipment the inputs are the parametersthat the process engineer can change in themachineThey arethe same as those considered to define the experimental testsexplained above Table 4 summarizes all the selected inputstheir units ranges and the relationship that they have withother inputs

The definition of the data set outputs takes different inter-ests into account that relate to the industrialmanufacturing ofDES In some cases a productivity orientation will encouragethe process engineer to optimize productivity (in terms of theMRR) keeping geometrical accuracy under certain acceptablethresholds (by fixing amaximum relative error in geometricalparameters) In other cases the geometrical accuracy will bethe main requirement and productivity will be a secondary

objective In yet other cases only one geometrical parameterfor example the depth of the DES will be critical and theother geometrical parameters should be kept under certainthresholds once again by fixing a maximum relative error forthese geometrical parameters Therefore this work considersthe geometrical dimensions and the MRR that is actuallyobtained as its output their deviance from the programmedvalues and the relative errors between the programmed andthe real values Table 5 summarizes all the calculated outputstheir units ranges and the relationship they have with otherinput or output variables In summary the 162 differentlaser conditions that were tested provided 14 data sets of 162instances each with 9 attributes and one output variable to bepredicted

3 Data-Mining Techniques

In our study we consider the analysis of each output variableseparately by defining a one-dimensional regression problemfor each case A regressor is a data-mining model in whichan output variable 119910 is modelled as a function 119891 of a set ofindependent variables 119909 called attributes in the conventionalnotation of data-mining where119898 is the number of attributesThe function is expressed as follows

119910estimated = 119891 (119909) 119909 isin 119877119898 119910 isin 119877 (1)

The aim of this work is to determine the most suitableregressor for this industrial problem The selection is per-formed by comparing the rootmean squared error (RMSE) ofseveral regressors over the data set Having a data collectionof 119899 pairs of real values 119909

119894 119910119894119894=119899

119894=1 the RMSE is an estimation

of the expected difference between the real and the forecasted

4 Journal of Applied Mathematics

Table 4 Input variables

Variable Units Range Relationshipx1 Programmed depth 120583m 50ndash90 Independentx2 Programmed radio 120583m 50ndash83 Independentx3 Programmed length 120583m 0ndash55 Independentx4 Programmed volume 10

3times 120583m3 718ndash726 43120587119909

3

2+ 12119909

2

21199093

x5 Intensity 60ndash100 Independentx6 Frequency KHz 30ndash60 Independentx7 Speed mms 200ndash600 Independentx8 Time s 9ndash24 Independentx9 Programmed MRR 10

3times 120583m3

s 30ndash81 11990941199098

Table 5 Output variables

Variable Units Range Relationshipy1 Measured volume 10

3times 120583m3 130ndash1701 Independent

y2 Measured depth 120583m 25ndash23060 Independenty3 Measured diameter 120583m 11850ndash20880 Independenty4 Measured length 120583m 0ndash7020 Independenty5 Measured MRR 10

3times 120583m3

s 12ndash121 11991011199098

y6 Volume error 103times 120583m3 minus980ndash596 119909

4minus 1199101

y7 Depth error 120583m minus18060ndash5690 1199091minus 1199092

y8 Width error 120583m minus6280ndashminus1663 21199092minus 1199103

y9 Length error 120583m minus2550ndash20880 1199093minus 1199104

y10 MRR error 103times 120583m3s minus61ndash66 119909

9minus 1199105

y11 Volume relative error Dimensionless minus136ndash082 11991061199094

y12 Depth relative error Dimensionless minus361ndash063 11991071199091

y13 Width relative error Dimensionless minus055ndash010 1199108(21199092)

y14 Length relative error Dimensionless minus071ndash015 11991091199093

output by a regressor It is expressed as the square root ofthe mean of the squares of the deviations as shown in thefollowing equation

RMSE =radic

sum119899

119905=1(119910119905minus 119891 (119909

119905))2

119899

(2)

We tested a wide range of the main families of state-of-the-art regression techniques as follows

(i) Function-based regressors we used two of themost popular algorithms Support Vector Regres-sion (SVR) [37] and ANNs [38] and also LinearRegression [39] which has an easier formulation thatallows direct physical interpretation of the modelsWe should note thewidespread use of SVR [40] whileANNshave been successfully applied to a great varietyof industrial modeling problems [41ndash43]

(ii) Instance-based methods specifically their most rep-resentative regressor k-nearest neighbors regressor[44] in this type of algorithm it is not necessaryto express an analytic relationship between the inputvariables and the output that is modeled an aspectthat makes this approach totally different from theother Instead of using an explicit formulation to

obtain a prediction it is calculated from set valuesstored in the training phase

(iii) Decision-tree-based regressors we have includedthese kinds of methods because they are used in theensembles as regressors as explained in Section 32

(iv) Ensemble techniques [45] are among the most pop-ular in the literature These methods have beensuccessfully applied to a wide variety of industrialproblems [34 46ndash49]

31 Linear Regression One of the most natural and simplestways of expressing relations between a set of inputs and anoutput is by using a linear function In this type of regressorthe variable to forecast is given by a linear combination of theattributes with predeterminedweights [39] as detailed in thefollowing equation

119910(119894)

estimated =

119896

sum

119895=0

(119908119895times 119909(119894)

119895) (3)

where 119910(119894)estimated denotes the output of the 119894th training instanceand 119909

(119894)

119895the jth attribute of the 119894th instance The sum of the

squares of the differences between real and forecasted output

Journal of Applied Mathematics 5

times1

times7

times9 times7

times6

times8

times7

times5

81833478 60174006 41546599

36805988 22336725 40669213

24656857

401199 29818975

lt60

lt500

lt300

lt69

ge60

ge500

lt500 ge500

ge300ge517955

ge375

ge69

lt11 ge11

lt375

lt517955

Figure 2 Model of the measured volume with a regression tree

is minimized to calculate the most adequate weights 119908119895

following the expression given in the following equation

119899

sum

119894=0

[

[

119910(119894)

minus

119896

sum

119895=0

(119908119895times 119909(119894)

119895)]

]

(4)

We used an improvement to this original formulation byselecting the attributes with the Akaike information criterion(AIC) [50] In (5) we can see how the AIC is defined wherek is the number of free parameters of the model (ie thenumber of input variables considered) and119875 is the probabilityof the estimation fitting the training dataThe aim is to obtainmodels that fit the training data but with as few parameters aspossible

AIC = minus2 ln (119875) + 2119896 (5)

32 Decision-Tree-Based Regressors The decision tree is adata-mining technique that builds hierarchical models thatare easily interpretable because they may be representedin graphical form as shown in Figures 2 and 3 with anexample of the output measured length as a function of theinput attributes In this type of model all the decisions areorganised around a single variable resulting in the final hier-archical structure This representation has three elementsthe nodes attributes taken for the decision (ellipses in therepresentation) the leaves final forecasted values (squares)

and these two elements being connected by arcs with thesplitting values for each attribute

As base regressors we have two types of regressors thestructure of which is based on decision trees regression treesand model trees Both families of algorithms are hierarchicalmodels represented by an abstract tree but they differ withregard to what their leaves store [51] In the case of regressiontrees a value is stored that represents the average value ofthe instances enclosed by the leaf while the model trees havea linear regression model that predicts the output value forthese instances The intrasubset variation in the class valuesdown each branch is minimized to build the initial tree [52]

In our experimentation we used one representativeimplementation of the two families reduced-error pruningtree (REPTree) [20] a regression tree and M5P [51] a modeltree In both cases we tested two configurations prunedand unpruned trees In the case of having one single treeas a regressor for some typologies of ensembles it is moreappropriate to prune the trees to avoid overfitting the trainingdata while some ensembles can take advantage of havingunpruned trees [53]

33 Ensemble Regressors An ensemble regressor combinesthe predictions of a set of so-called base regressors using avoting system [54] as we can see in Figure 4 Probably thethree most popular ensemble techniques are Bagging [55]Boosting [56] and Random Subspaces [53] For Bagging and

6 Journal of Applied Mathematics

times1

times5LM 1

LM 3LM 2

le60 gt60

le69

gt69

minus20796469lowastx1

y1 =

y1 = y1 =

+70976312lowastx2minus2658774lowastx3+5989864lowastx5minus5705179lowastx7+5011662636

minus11649482lowastx1+30870216lowastx2+2546212lowastx3+9627808lowastx5minus2879248lowastx7minus05008lowastx9+3306695834

minus11649482lowastx1+16528734lowastx3+7034302lowastx5minus539148lowastx7minus1497lowastx9+5231650737

Figure 3 Model of the measured volume with a model tree

Boosting the ensemble regressor is formed from a set ofweak base regressors trained by applying the same learningalgorithm to different sets obtained from the training setIn Bagging each base regressor is trained with a datasetobtained from random sampling with replacement [55] (iea particular instance may appear repeated several times ormay not be considered in any base regressor) As a resultthe base regressors are independent However Boosting usesall the instances and a set of weights to train each baseregressor Each instance has a weight pointing out howimportant it is to predict that instance correctly Some baseregressors can take weights into account (eg decision trees)Boosting trains base regressors sequentially because errorsfor training instances in the previous base regressor areused for reweighting The new base regressors are focusedon instances that previous base regressors have wronglypredictedThe voting system for Boosting is also weighted bythe accuracy of each base regressor [57]

Random Subspaces follow a different approach eachbase regressor is trained in a subset of fewer dimensionsthan the original space This subset of features is randomlychosen for all regressors This procedure is followed with theintention of avoiding the well-known problem of the curse ofdimensionality that occurs with many regressors when thereare many features as well as improving accuracy by choosing

base regressors with low correlations between them In totalwe have used five ensemble regression techniques of the stateof the art for regression two variants of Bagging one variantof Boosting and Random Subspaces The list of ensemblemethods used in the experimentation is enumerated below

(i) Bagging is in its initial formulation for regression(ii) Iterated Bagging combines several Bagging ensem-

bles the first one keeping to a typical construction andthe others using residuals (differences between thereal and the predicted values) for training purposes[58]

(iii) Random Subspaces are in their formulation forregression

(iv) AdaBoostR2 [59] is a boosting implementation forregression Calculated from the absolute errors ofeach training example 119897(119894) = |119891

119877(119909119894) minus 119910119894| the so-

called loss function 119871(119894) was used to estimate theerror of each base regressor and to assign a suitableweight to each one Let Den be the maximum valueof 119897(119894) in the training set then three different lossfunctions are used linear 119871

119897(119894) = 119897(119894)Den square

119871119878(119894) = [119897(119894)Den]2 and exponential 119871

119864(119894) = 1 minus

exp(minus119897(119894)Den)(v) Additive regression is this regressor that has a learn-

ing algorithm called Stochastic Gradient Boosting[60] which is a modification of Adaptive Bagging ahybrid Bagging Boosting procedure intended for leastsquares fitting on additive expansions [60]

34 k-Nearest Neighbor Regressor This regressor is the mostrepresentative algorithm among the instance-based learningThese kinds ofmethods forecast the output value using storedvalues of the most similar instances of the training data [61]The estimation is the mean of the k most similar traininginstances Two configuration decisions have to be taken

(i) how many nearest neighbors to use to forecast thevalue of a new instance

(ii) which distance function to use to measure the simi-larity between the instances

In our experimentation we have used the most commondefinition of the distance function the Euclidean distancewhile the number of neighbors is optimized using cross-validation

35 Support Vector Regressor This kind of regressor is basedon a parametric function the parameters of which areoptimized during the training process in order to minimizethe RMSE [62] Mathematically the goal is to find a function119891(119909) that has the most deviation 120598 from the targets thatare actually obtained 119910

119894 for all the training data and at the

same time is as flat as possible The following equation is anexample of SVR with the particular case of a linear functioncalled linear SVR where ⟨sdot sdot⟩ denotes the inner product inthe input space119883

119891 (119909) = ⟨119908 119909⟩ + 119887 with 119908 isin 119883 119887 isin 119877 (6)

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 3: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 3

L

D120601

D

120601

Figure 1 Cavity geometries used in the experiments

ideal maximum pulse intensity) on the responses Somescreening experiments were performed to determine theproper parametric levels Three different levels were selectedfrom the results of each input factor which are presentedin Table 3 This design of experiments resulted in a total of162 experiments 27 combinations for each geometry understudy All the experiments were machined from the same316L SS blank under the same ambient conditions Theresponse variables were the cavity dimensions (depth andradius) and the volume of removed material A confocalAxio CSM 700 Carl Zeiss microscope was used for thedimensional measurements and for characterization of thecavities Moreover negatives of some of the samples wereobtained with surface replicant silicone in order to obtain 3DSEM images

Having performed the experimental tests the inputs andoutputs for the datasets had to be defined to generate thedata sets for the data-mining modeling On the whole theselection of the inputs is easy because they are set by thespecifications of the equipment the inputs are the parametersthat the process engineer can change in themachineThey arethe same as those considered to define the experimental testsexplained above Table 4 summarizes all the selected inputstheir units ranges and the relationship that they have withother inputs

The definition of the data set outputs takes different inter-ests into account that relate to the industrialmanufacturing ofDES In some cases a productivity orientation will encouragethe process engineer to optimize productivity (in terms of theMRR) keeping geometrical accuracy under certain acceptablethresholds (by fixing amaximum relative error in geometricalparameters) In other cases the geometrical accuracy will bethe main requirement and productivity will be a secondary

objective In yet other cases only one geometrical parameterfor example the depth of the DES will be critical and theother geometrical parameters should be kept under certainthresholds once again by fixing a maximum relative error forthese geometrical parameters Therefore this work considersthe geometrical dimensions and the MRR that is actuallyobtained as its output their deviance from the programmedvalues and the relative errors between the programmed andthe real values Table 5 summarizes all the calculated outputstheir units ranges and the relationship they have with otherinput or output variables In summary the 162 differentlaser conditions that were tested provided 14 data sets of 162instances each with 9 attributes and one output variable to bepredicted

3 Data-Mining Techniques

In our study we consider the analysis of each output variableseparately by defining a one-dimensional regression problemfor each case A regressor is a data-mining model in whichan output variable 119910 is modelled as a function 119891 of a set ofindependent variables 119909 called attributes in the conventionalnotation of data-mining where119898 is the number of attributesThe function is expressed as follows

119910estimated = 119891 (119909) 119909 isin 119877119898 119910 isin 119877 (1)

The aim of this work is to determine the most suitableregressor for this industrial problem The selection is per-formed by comparing the rootmean squared error (RMSE) ofseveral regressors over the data set Having a data collectionof 119899 pairs of real values 119909

119894 119910119894119894=119899

119894=1 the RMSE is an estimation

of the expected difference between the real and the forecasted

4 Journal of Applied Mathematics

Table 4 Input variables

Variable Units Range Relationshipx1 Programmed depth 120583m 50ndash90 Independentx2 Programmed radio 120583m 50ndash83 Independentx3 Programmed length 120583m 0ndash55 Independentx4 Programmed volume 10

3times 120583m3 718ndash726 43120587119909

3

2+ 12119909

2

21199093

x5 Intensity 60ndash100 Independentx6 Frequency KHz 30ndash60 Independentx7 Speed mms 200ndash600 Independentx8 Time s 9ndash24 Independentx9 Programmed MRR 10

3times 120583m3

s 30ndash81 11990941199098

Table 5 Output variables

Variable Units Range Relationshipy1 Measured volume 10

3times 120583m3 130ndash1701 Independent

y2 Measured depth 120583m 25ndash23060 Independenty3 Measured diameter 120583m 11850ndash20880 Independenty4 Measured length 120583m 0ndash7020 Independenty5 Measured MRR 10

3times 120583m3

s 12ndash121 11991011199098

y6 Volume error 103times 120583m3 minus980ndash596 119909

4minus 1199101

y7 Depth error 120583m minus18060ndash5690 1199091minus 1199092

y8 Width error 120583m minus6280ndashminus1663 21199092minus 1199103

y9 Length error 120583m minus2550ndash20880 1199093minus 1199104

y10 MRR error 103times 120583m3s minus61ndash66 119909

9minus 1199105

y11 Volume relative error Dimensionless minus136ndash082 11991061199094

y12 Depth relative error Dimensionless minus361ndash063 11991071199091

y13 Width relative error Dimensionless minus055ndash010 1199108(21199092)

y14 Length relative error Dimensionless minus071ndash015 11991091199093

output by a regressor It is expressed as the square root ofthe mean of the squares of the deviations as shown in thefollowing equation

RMSE =radic

sum119899

119905=1(119910119905minus 119891 (119909

119905))2

119899

(2)

We tested a wide range of the main families of state-of-the-art regression techniques as follows

(i) Function-based regressors we used two of themost popular algorithms Support Vector Regres-sion (SVR) [37] and ANNs [38] and also LinearRegression [39] which has an easier formulation thatallows direct physical interpretation of the modelsWe should note thewidespread use of SVR [40] whileANNshave been successfully applied to a great varietyof industrial modeling problems [41ndash43]

(ii) Instance-based methods specifically their most rep-resentative regressor k-nearest neighbors regressor[44] in this type of algorithm it is not necessaryto express an analytic relationship between the inputvariables and the output that is modeled an aspectthat makes this approach totally different from theother Instead of using an explicit formulation to

obtain a prediction it is calculated from set valuesstored in the training phase

(iii) Decision-tree-based regressors we have includedthese kinds of methods because they are used in theensembles as regressors as explained in Section 32

(iv) Ensemble techniques [45] are among the most pop-ular in the literature These methods have beensuccessfully applied to a wide variety of industrialproblems [34 46ndash49]

31 Linear Regression One of the most natural and simplestways of expressing relations between a set of inputs and anoutput is by using a linear function In this type of regressorthe variable to forecast is given by a linear combination of theattributes with predeterminedweights [39] as detailed in thefollowing equation

119910(119894)

estimated =

119896

sum

119895=0

(119908119895times 119909(119894)

119895) (3)

where 119910(119894)estimated denotes the output of the 119894th training instanceand 119909

(119894)

119895the jth attribute of the 119894th instance The sum of the

squares of the differences between real and forecasted output

Journal of Applied Mathematics 5

times1

times7

times9 times7

times6

times8

times7

times5

81833478 60174006 41546599

36805988 22336725 40669213

24656857

401199 29818975

lt60

lt500

lt300

lt69

ge60

ge500

lt500 ge500

ge300ge517955

ge375

ge69

lt11 ge11

lt375

lt517955

Figure 2 Model of the measured volume with a regression tree

is minimized to calculate the most adequate weights 119908119895

following the expression given in the following equation

119899

sum

119894=0

[

[

119910(119894)

minus

119896

sum

119895=0

(119908119895times 119909(119894)

119895)]

]

(4)

We used an improvement to this original formulation byselecting the attributes with the Akaike information criterion(AIC) [50] In (5) we can see how the AIC is defined wherek is the number of free parameters of the model (ie thenumber of input variables considered) and119875 is the probabilityof the estimation fitting the training dataThe aim is to obtainmodels that fit the training data but with as few parameters aspossible

AIC = minus2 ln (119875) + 2119896 (5)

32 Decision-Tree-Based Regressors The decision tree is adata-mining technique that builds hierarchical models thatare easily interpretable because they may be representedin graphical form as shown in Figures 2 and 3 with anexample of the output measured length as a function of theinput attributes In this type of model all the decisions areorganised around a single variable resulting in the final hier-archical structure This representation has three elementsthe nodes attributes taken for the decision (ellipses in therepresentation) the leaves final forecasted values (squares)

and these two elements being connected by arcs with thesplitting values for each attribute

As base regressors we have two types of regressors thestructure of which is based on decision trees regression treesand model trees Both families of algorithms are hierarchicalmodels represented by an abstract tree but they differ withregard to what their leaves store [51] In the case of regressiontrees a value is stored that represents the average value ofthe instances enclosed by the leaf while the model trees havea linear regression model that predicts the output value forthese instances The intrasubset variation in the class valuesdown each branch is minimized to build the initial tree [52]

In our experimentation we used one representativeimplementation of the two families reduced-error pruningtree (REPTree) [20] a regression tree and M5P [51] a modeltree In both cases we tested two configurations prunedand unpruned trees In the case of having one single treeas a regressor for some typologies of ensembles it is moreappropriate to prune the trees to avoid overfitting the trainingdata while some ensembles can take advantage of havingunpruned trees [53]

33 Ensemble Regressors An ensemble regressor combinesthe predictions of a set of so-called base regressors using avoting system [54] as we can see in Figure 4 Probably thethree most popular ensemble techniques are Bagging [55]Boosting [56] and Random Subspaces [53] For Bagging and

6 Journal of Applied Mathematics

times1

times5LM 1

LM 3LM 2

le60 gt60

le69

gt69

minus20796469lowastx1

y1 =

y1 = y1 =

+70976312lowastx2minus2658774lowastx3+5989864lowastx5minus5705179lowastx7+5011662636

minus11649482lowastx1+30870216lowastx2+2546212lowastx3+9627808lowastx5minus2879248lowastx7minus05008lowastx9+3306695834

minus11649482lowastx1+16528734lowastx3+7034302lowastx5minus539148lowastx7minus1497lowastx9+5231650737

Figure 3 Model of the measured volume with a model tree

Boosting the ensemble regressor is formed from a set ofweak base regressors trained by applying the same learningalgorithm to different sets obtained from the training setIn Bagging each base regressor is trained with a datasetobtained from random sampling with replacement [55] (iea particular instance may appear repeated several times ormay not be considered in any base regressor) As a resultthe base regressors are independent However Boosting usesall the instances and a set of weights to train each baseregressor Each instance has a weight pointing out howimportant it is to predict that instance correctly Some baseregressors can take weights into account (eg decision trees)Boosting trains base regressors sequentially because errorsfor training instances in the previous base regressor areused for reweighting The new base regressors are focusedon instances that previous base regressors have wronglypredictedThe voting system for Boosting is also weighted bythe accuracy of each base regressor [57]

Random Subspaces follow a different approach eachbase regressor is trained in a subset of fewer dimensionsthan the original space This subset of features is randomlychosen for all regressors This procedure is followed with theintention of avoiding the well-known problem of the curse ofdimensionality that occurs with many regressors when thereare many features as well as improving accuracy by choosing

base regressors with low correlations between them In totalwe have used five ensemble regression techniques of the stateof the art for regression two variants of Bagging one variantof Boosting and Random Subspaces The list of ensemblemethods used in the experimentation is enumerated below

(i) Bagging is in its initial formulation for regression(ii) Iterated Bagging combines several Bagging ensem-

bles the first one keeping to a typical construction andthe others using residuals (differences between thereal and the predicted values) for training purposes[58]

(iii) Random Subspaces are in their formulation forregression

(iv) AdaBoostR2 [59] is a boosting implementation forregression Calculated from the absolute errors ofeach training example 119897(119894) = |119891

119877(119909119894) minus 119910119894| the so-

called loss function 119871(119894) was used to estimate theerror of each base regressor and to assign a suitableweight to each one Let Den be the maximum valueof 119897(119894) in the training set then three different lossfunctions are used linear 119871

119897(119894) = 119897(119894)Den square

119871119878(119894) = [119897(119894)Den]2 and exponential 119871

119864(119894) = 1 minus

exp(minus119897(119894)Den)(v) Additive regression is this regressor that has a learn-

ing algorithm called Stochastic Gradient Boosting[60] which is a modification of Adaptive Bagging ahybrid Bagging Boosting procedure intended for leastsquares fitting on additive expansions [60]

34 k-Nearest Neighbor Regressor This regressor is the mostrepresentative algorithm among the instance-based learningThese kinds ofmethods forecast the output value using storedvalues of the most similar instances of the training data [61]The estimation is the mean of the k most similar traininginstances Two configuration decisions have to be taken

(i) how many nearest neighbors to use to forecast thevalue of a new instance

(ii) which distance function to use to measure the simi-larity between the instances

In our experimentation we have used the most commondefinition of the distance function the Euclidean distancewhile the number of neighbors is optimized using cross-validation

35 Support Vector Regressor This kind of regressor is basedon a parametric function the parameters of which areoptimized during the training process in order to minimizethe RMSE [62] Mathematically the goal is to find a function119891(119909) that has the most deviation 120598 from the targets thatare actually obtained 119910

119894 for all the training data and at the

same time is as flat as possible The following equation is anexample of SVR with the particular case of a linear functioncalled linear SVR where ⟨sdot sdot⟩ denotes the inner product inthe input space119883

119891 (119909) = ⟨119908 119909⟩ + 119887 with 119908 isin 119883 119887 isin 119877 (6)

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 4: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

4 Journal of Applied Mathematics

Table 4 Input variables

Variable Units Range Relationshipx1 Programmed depth 120583m 50ndash90 Independentx2 Programmed radio 120583m 50ndash83 Independentx3 Programmed length 120583m 0ndash55 Independentx4 Programmed volume 10

3times 120583m3 718ndash726 43120587119909

3

2+ 12119909

2

21199093

x5 Intensity 60ndash100 Independentx6 Frequency KHz 30ndash60 Independentx7 Speed mms 200ndash600 Independentx8 Time s 9ndash24 Independentx9 Programmed MRR 10

3times 120583m3

s 30ndash81 11990941199098

Table 5 Output variables

Variable Units Range Relationshipy1 Measured volume 10

3times 120583m3 130ndash1701 Independent

y2 Measured depth 120583m 25ndash23060 Independenty3 Measured diameter 120583m 11850ndash20880 Independenty4 Measured length 120583m 0ndash7020 Independenty5 Measured MRR 10

3times 120583m3

s 12ndash121 11991011199098

y6 Volume error 103times 120583m3 minus980ndash596 119909

4minus 1199101

y7 Depth error 120583m minus18060ndash5690 1199091minus 1199092

y8 Width error 120583m minus6280ndashminus1663 21199092minus 1199103

y9 Length error 120583m minus2550ndash20880 1199093minus 1199104

y10 MRR error 103times 120583m3s minus61ndash66 119909

9minus 1199105

y11 Volume relative error Dimensionless minus136ndash082 11991061199094

y12 Depth relative error Dimensionless minus361ndash063 11991071199091

y13 Width relative error Dimensionless minus055ndash010 1199108(21199092)

y14 Length relative error Dimensionless minus071ndash015 11991091199093

output by a regressor It is expressed as the square root ofthe mean of the squares of the deviations as shown in thefollowing equation

RMSE =radic

sum119899

119905=1(119910119905minus 119891 (119909

119905))2

119899

(2)

We tested a wide range of the main families of state-of-the-art regression techniques as follows

(i) Function-based regressors we used two of themost popular algorithms Support Vector Regres-sion (SVR) [37] and ANNs [38] and also LinearRegression [39] which has an easier formulation thatallows direct physical interpretation of the modelsWe should note thewidespread use of SVR [40] whileANNshave been successfully applied to a great varietyof industrial modeling problems [41ndash43]

(ii) Instance-based methods specifically their most rep-resentative regressor k-nearest neighbors regressor[44] in this type of algorithm it is not necessaryto express an analytic relationship between the inputvariables and the output that is modeled an aspectthat makes this approach totally different from theother Instead of using an explicit formulation to

obtain a prediction it is calculated from set valuesstored in the training phase

(iii) Decision-tree-based regressors we have includedthese kinds of methods because they are used in theensembles as regressors as explained in Section 32

(iv) Ensemble techniques [45] are among the most pop-ular in the literature These methods have beensuccessfully applied to a wide variety of industrialproblems [34 46ndash49]

31 Linear Regression One of the most natural and simplestways of expressing relations between a set of inputs and anoutput is by using a linear function In this type of regressorthe variable to forecast is given by a linear combination of theattributes with predeterminedweights [39] as detailed in thefollowing equation

119910(119894)

estimated =

119896

sum

119895=0

(119908119895times 119909(119894)

119895) (3)

where 119910(119894)estimated denotes the output of the 119894th training instanceand 119909

(119894)

119895the jth attribute of the 119894th instance The sum of the

squares of the differences between real and forecasted output

Journal of Applied Mathematics 5

times1

times7

times9 times7

times6

times8

times7

times5

81833478 60174006 41546599

36805988 22336725 40669213

24656857

401199 29818975

lt60

lt500

lt300

lt69

ge60

ge500

lt500 ge500

ge300ge517955

ge375

ge69

lt11 ge11

lt375

lt517955

Figure 2 Model of the measured volume with a regression tree

is minimized to calculate the most adequate weights 119908119895

following the expression given in the following equation

119899

sum

119894=0

[

[

119910(119894)

minus

119896

sum

119895=0

(119908119895times 119909(119894)

119895)]

]

(4)

We used an improvement to this original formulation byselecting the attributes with the Akaike information criterion(AIC) [50] In (5) we can see how the AIC is defined wherek is the number of free parameters of the model (ie thenumber of input variables considered) and119875 is the probabilityof the estimation fitting the training dataThe aim is to obtainmodels that fit the training data but with as few parameters aspossible

AIC = minus2 ln (119875) + 2119896 (5)

32 Decision-Tree-Based Regressors The decision tree is adata-mining technique that builds hierarchical models thatare easily interpretable because they may be representedin graphical form as shown in Figures 2 and 3 with anexample of the output measured length as a function of theinput attributes In this type of model all the decisions areorganised around a single variable resulting in the final hier-archical structure This representation has three elementsthe nodes attributes taken for the decision (ellipses in therepresentation) the leaves final forecasted values (squares)

and these two elements being connected by arcs with thesplitting values for each attribute

As base regressors we have two types of regressors thestructure of which is based on decision trees regression treesand model trees Both families of algorithms are hierarchicalmodels represented by an abstract tree but they differ withregard to what their leaves store [51] In the case of regressiontrees a value is stored that represents the average value ofthe instances enclosed by the leaf while the model trees havea linear regression model that predicts the output value forthese instances The intrasubset variation in the class valuesdown each branch is minimized to build the initial tree [52]

In our experimentation we used one representativeimplementation of the two families reduced-error pruningtree (REPTree) [20] a regression tree and M5P [51] a modeltree In both cases we tested two configurations prunedand unpruned trees In the case of having one single treeas a regressor for some typologies of ensembles it is moreappropriate to prune the trees to avoid overfitting the trainingdata while some ensembles can take advantage of havingunpruned trees [53]

33 Ensemble Regressors An ensemble regressor combinesthe predictions of a set of so-called base regressors using avoting system [54] as we can see in Figure 4 Probably thethree most popular ensemble techniques are Bagging [55]Boosting [56] and Random Subspaces [53] For Bagging and

6 Journal of Applied Mathematics

times1

times5LM 1

LM 3LM 2

le60 gt60

le69

gt69

minus20796469lowastx1

y1 =

y1 = y1 =

+70976312lowastx2minus2658774lowastx3+5989864lowastx5minus5705179lowastx7+5011662636

minus11649482lowastx1+30870216lowastx2+2546212lowastx3+9627808lowastx5minus2879248lowastx7minus05008lowastx9+3306695834

minus11649482lowastx1+16528734lowastx3+7034302lowastx5minus539148lowastx7minus1497lowastx9+5231650737

Figure 3 Model of the measured volume with a model tree

Boosting the ensemble regressor is formed from a set ofweak base regressors trained by applying the same learningalgorithm to different sets obtained from the training setIn Bagging each base regressor is trained with a datasetobtained from random sampling with replacement [55] (iea particular instance may appear repeated several times ormay not be considered in any base regressor) As a resultthe base regressors are independent However Boosting usesall the instances and a set of weights to train each baseregressor Each instance has a weight pointing out howimportant it is to predict that instance correctly Some baseregressors can take weights into account (eg decision trees)Boosting trains base regressors sequentially because errorsfor training instances in the previous base regressor areused for reweighting The new base regressors are focusedon instances that previous base regressors have wronglypredictedThe voting system for Boosting is also weighted bythe accuracy of each base regressor [57]

Random Subspaces follow a different approach eachbase regressor is trained in a subset of fewer dimensionsthan the original space This subset of features is randomlychosen for all regressors This procedure is followed with theintention of avoiding the well-known problem of the curse ofdimensionality that occurs with many regressors when thereare many features as well as improving accuracy by choosing

base regressors with low correlations between them In totalwe have used five ensemble regression techniques of the stateof the art for regression two variants of Bagging one variantof Boosting and Random Subspaces The list of ensemblemethods used in the experimentation is enumerated below

(i) Bagging is in its initial formulation for regression(ii) Iterated Bagging combines several Bagging ensem-

bles the first one keeping to a typical construction andthe others using residuals (differences between thereal and the predicted values) for training purposes[58]

(iii) Random Subspaces are in their formulation forregression

(iv) AdaBoostR2 [59] is a boosting implementation forregression Calculated from the absolute errors ofeach training example 119897(119894) = |119891

119877(119909119894) minus 119910119894| the so-

called loss function 119871(119894) was used to estimate theerror of each base regressor and to assign a suitableweight to each one Let Den be the maximum valueof 119897(119894) in the training set then three different lossfunctions are used linear 119871

119897(119894) = 119897(119894)Den square

119871119878(119894) = [119897(119894)Den]2 and exponential 119871

119864(119894) = 1 minus

exp(minus119897(119894)Den)(v) Additive regression is this regressor that has a learn-

ing algorithm called Stochastic Gradient Boosting[60] which is a modification of Adaptive Bagging ahybrid Bagging Boosting procedure intended for leastsquares fitting on additive expansions [60]

34 k-Nearest Neighbor Regressor This regressor is the mostrepresentative algorithm among the instance-based learningThese kinds ofmethods forecast the output value using storedvalues of the most similar instances of the training data [61]The estimation is the mean of the k most similar traininginstances Two configuration decisions have to be taken

(i) how many nearest neighbors to use to forecast thevalue of a new instance

(ii) which distance function to use to measure the simi-larity between the instances

In our experimentation we have used the most commondefinition of the distance function the Euclidean distancewhile the number of neighbors is optimized using cross-validation

35 Support Vector Regressor This kind of regressor is basedon a parametric function the parameters of which areoptimized during the training process in order to minimizethe RMSE [62] Mathematically the goal is to find a function119891(119909) that has the most deviation 120598 from the targets thatare actually obtained 119910

119894 for all the training data and at the

same time is as flat as possible The following equation is anexample of SVR with the particular case of a linear functioncalled linear SVR where ⟨sdot sdot⟩ denotes the inner product inthe input space119883

119891 (119909) = ⟨119908 119909⟩ + 119887 with 119908 isin 119883 119887 isin 119877 (6)

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 5: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 5

times1

times7

times9 times7

times6

times8

times7

times5

81833478 60174006 41546599

36805988 22336725 40669213

24656857

401199 29818975

lt60

lt500

lt300

lt69

ge60

ge500

lt500 ge500

ge300ge517955

ge375

ge69

lt11 ge11

lt375

lt517955

Figure 2 Model of the measured volume with a regression tree

is minimized to calculate the most adequate weights 119908119895

following the expression given in the following equation

119899

sum

119894=0

[

[

119910(119894)

minus

119896

sum

119895=0

(119908119895times 119909(119894)

119895)]

]

(4)

We used an improvement to this original formulation byselecting the attributes with the Akaike information criterion(AIC) [50] In (5) we can see how the AIC is defined wherek is the number of free parameters of the model (ie thenumber of input variables considered) and119875 is the probabilityof the estimation fitting the training dataThe aim is to obtainmodels that fit the training data but with as few parameters aspossible

AIC = minus2 ln (119875) + 2119896 (5)

32 Decision-Tree-Based Regressors The decision tree is adata-mining technique that builds hierarchical models thatare easily interpretable because they may be representedin graphical form as shown in Figures 2 and 3 with anexample of the output measured length as a function of theinput attributes In this type of model all the decisions areorganised around a single variable resulting in the final hier-archical structure This representation has three elementsthe nodes attributes taken for the decision (ellipses in therepresentation) the leaves final forecasted values (squares)

and these two elements being connected by arcs with thesplitting values for each attribute

As base regressors we have two types of regressors thestructure of which is based on decision trees regression treesand model trees Both families of algorithms are hierarchicalmodels represented by an abstract tree but they differ withregard to what their leaves store [51] In the case of regressiontrees a value is stored that represents the average value ofthe instances enclosed by the leaf while the model trees havea linear regression model that predicts the output value forthese instances The intrasubset variation in the class valuesdown each branch is minimized to build the initial tree [52]

In our experimentation we used one representativeimplementation of the two families reduced-error pruningtree (REPTree) [20] a regression tree and M5P [51] a modeltree In both cases we tested two configurations prunedand unpruned trees In the case of having one single treeas a regressor for some typologies of ensembles it is moreappropriate to prune the trees to avoid overfitting the trainingdata while some ensembles can take advantage of havingunpruned trees [53]

33 Ensemble Regressors An ensemble regressor combinesthe predictions of a set of so-called base regressors using avoting system [54] as we can see in Figure 4 Probably thethree most popular ensemble techniques are Bagging [55]Boosting [56] and Random Subspaces [53] For Bagging and

6 Journal of Applied Mathematics

times1

times5LM 1

LM 3LM 2

le60 gt60

le69

gt69

minus20796469lowastx1

y1 =

y1 = y1 =

+70976312lowastx2minus2658774lowastx3+5989864lowastx5minus5705179lowastx7+5011662636

minus11649482lowastx1+30870216lowastx2+2546212lowastx3+9627808lowastx5minus2879248lowastx7minus05008lowastx9+3306695834

minus11649482lowastx1+16528734lowastx3+7034302lowastx5minus539148lowastx7minus1497lowastx9+5231650737

Figure 3 Model of the measured volume with a model tree

Boosting the ensemble regressor is formed from a set ofweak base regressors trained by applying the same learningalgorithm to different sets obtained from the training setIn Bagging each base regressor is trained with a datasetobtained from random sampling with replacement [55] (iea particular instance may appear repeated several times ormay not be considered in any base regressor) As a resultthe base regressors are independent However Boosting usesall the instances and a set of weights to train each baseregressor Each instance has a weight pointing out howimportant it is to predict that instance correctly Some baseregressors can take weights into account (eg decision trees)Boosting trains base regressors sequentially because errorsfor training instances in the previous base regressor areused for reweighting The new base regressors are focusedon instances that previous base regressors have wronglypredictedThe voting system for Boosting is also weighted bythe accuracy of each base regressor [57]

Random Subspaces follow a different approach eachbase regressor is trained in a subset of fewer dimensionsthan the original space This subset of features is randomlychosen for all regressors This procedure is followed with theintention of avoiding the well-known problem of the curse ofdimensionality that occurs with many regressors when thereare many features as well as improving accuracy by choosing

base regressors with low correlations between them In totalwe have used five ensemble regression techniques of the stateof the art for regression two variants of Bagging one variantof Boosting and Random Subspaces The list of ensemblemethods used in the experimentation is enumerated below

(i) Bagging is in its initial formulation for regression(ii) Iterated Bagging combines several Bagging ensem-

bles the first one keeping to a typical construction andthe others using residuals (differences between thereal and the predicted values) for training purposes[58]

(iii) Random Subspaces are in their formulation forregression

(iv) AdaBoostR2 [59] is a boosting implementation forregression Calculated from the absolute errors ofeach training example 119897(119894) = |119891

119877(119909119894) minus 119910119894| the so-

called loss function 119871(119894) was used to estimate theerror of each base regressor and to assign a suitableweight to each one Let Den be the maximum valueof 119897(119894) in the training set then three different lossfunctions are used linear 119871

119897(119894) = 119897(119894)Den square

119871119878(119894) = [119897(119894)Den]2 and exponential 119871

119864(119894) = 1 minus

exp(minus119897(119894)Den)(v) Additive regression is this regressor that has a learn-

ing algorithm called Stochastic Gradient Boosting[60] which is a modification of Adaptive Bagging ahybrid Bagging Boosting procedure intended for leastsquares fitting on additive expansions [60]

34 k-Nearest Neighbor Regressor This regressor is the mostrepresentative algorithm among the instance-based learningThese kinds ofmethods forecast the output value using storedvalues of the most similar instances of the training data [61]The estimation is the mean of the k most similar traininginstances Two configuration decisions have to be taken

(i) how many nearest neighbors to use to forecast thevalue of a new instance

(ii) which distance function to use to measure the simi-larity between the instances

In our experimentation we have used the most commondefinition of the distance function the Euclidean distancewhile the number of neighbors is optimized using cross-validation

35 Support Vector Regressor This kind of regressor is basedon a parametric function the parameters of which areoptimized during the training process in order to minimizethe RMSE [62] Mathematically the goal is to find a function119891(119909) that has the most deviation 120598 from the targets thatare actually obtained 119910

119894 for all the training data and at the

same time is as flat as possible The following equation is anexample of SVR with the particular case of a linear functioncalled linear SVR where ⟨sdot sdot⟩ denotes the inner product inthe input space119883

119891 (119909) = ⟨119908 119909⟩ + 119887 with 119908 isin 119883 119887 isin 119877 (6)

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 6: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

6 Journal of Applied Mathematics

times1

times5LM 1

LM 3LM 2

le60 gt60

le69

gt69

minus20796469lowastx1

y1 =

y1 = y1 =

+70976312lowastx2minus2658774lowastx3+5989864lowastx5minus5705179lowastx7+5011662636

minus11649482lowastx1+30870216lowastx2+2546212lowastx3+9627808lowastx5minus2879248lowastx7minus05008lowastx9+3306695834

minus11649482lowastx1+16528734lowastx3+7034302lowastx5minus539148lowastx7minus1497lowastx9+5231650737

Figure 3 Model of the measured volume with a model tree

Boosting the ensemble regressor is formed from a set ofweak base regressors trained by applying the same learningalgorithm to different sets obtained from the training setIn Bagging each base regressor is trained with a datasetobtained from random sampling with replacement [55] (iea particular instance may appear repeated several times ormay not be considered in any base regressor) As a resultthe base regressors are independent However Boosting usesall the instances and a set of weights to train each baseregressor Each instance has a weight pointing out howimportant it is to predict that instance correctly Some baseregressors can take weights into account (eg decision trees)Boosting trains base regressors sequentially because errorsfor training instances in the previous base regressor areused for reweighting The new base regressors are focusedon instances that previous base regressors have wronglypredictedThe voting system for Boosting is also weighted bythe accuracy of each base regressor [57]

Random Subspaces follow a different approach eachbase regressor is trained in a subset of fewer dimensionsthan the original space This subset of features is randomlychosen for all regressors This procedure is followed with theintention of avoiding the well-known problem of the curse ofdimensionality that occurs with many regressors when thereare many features as well as improving accuracy by choosing

base regressors with low correlations between them In totalwe have used five ensemble regression techniques of the stateof the art for regression two variants of Bagging one variantof Boosting and Random Subspaces The list of ensemblemethods used in the experimentation is enumerated below

(i) Bagging is in its initial formulation for regression(ii) Iterated Bagging combines several Bagging ensem-

bles the first one keeping to a typical construction andthe others using residuals (differences between thereal and the predicted values) for training purposes[58]

(iii) Random Subspaces are in their formulation forregression

(iv) AdaBoostR2 [59] is a boosting implementation forregression Calculated from the absolute errors ofeach training example 119897(119894) = |119891

119877(119909119894) minus 119910119894| the so-

called loss function 119871(119894) was used to estimate theerror of each base regressor and to assign a suitableweight to each one Let Den be the maximum valueof 119897(119894) in the training set then three different lossfunctions are used linear 119871

119897(119894) = 119897(119894)Den square

119871119878(119894) = [119897(119894)Den]2 and exponential 119871

119864(119894) = 1 minus

exp(minus119897(119894)Den)(v) Additive regression is this regressor that has a learn-

ing algorithm called Stochastic Gradient Boosting[60] which is a modification of Adaptive Bagging ahybrid Bagging Boosting procedure intended for leastsquares fitting on additive expansions [60]

34 k-Nearest Neighbor Regressor This regressor is the mostrepresentative algorithm among the instance-based learningThese kinds ofmethods forecast the output value using storedvalues of the most similar instances of the training data [61]The estimation is the mean of the k most similar traininginstances Two configuration decisions have to be taken

(i) how many nearest neighbors to use to forecast thevalue of a new instance

(ii) which distance function to use to measure the simi-larity between the instances

In our experimentation we have used the most commondefinition of the distance function the Euclidean distancewhile the number of neighbors is optimized using cross-validation

35 Support Vector Regressor This kind of regressor is basedon a parametric function the parameters of which areoptimized during the training process in order to minimizethe RMSE [62] Mathematically the goal is to find a function119891(119909) that has the most deviation 120598 from the targets thatare actually obtained 119910

119894 for all the training data and at the

same time is as flat as possible The following equation is anexample of SVR with the particular case of a linear functioncalled linear SVR where ⟨sdot sdot⟩ denotes the inner product inthe input space119883

119891 (119909) = ⟨119908 119909⟩ + 119887 with 119908 isin 119883 119887 isin 119877 (6)

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 7: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 7

Final prediction

Voting systemsum

Base regressor 1 Base regressor 2 Base regressor n

y = f(x)

X

X

X X

Y

Y Y Y

X1 Y1

Xm Ym

middot middot middot middot middot middot middot middot middot middot middot middotmiddot middot middot

middot middot middot middot middot middot

middot middot middot

Leve

l2

outp

utLe

vel0

or

igin

al d

ata

Leve

l1

base

regr

esso

rs

Figure 4 Ensemble regressor architecture

The norms of 119908 have to be minimized to find a flat func-tion but in real data solving this optimization problem can beunfeasible In consequence Boser et al [37] introduced threeterms into the formulation the slack variables 120585 120585lowast and C atrade-off parameter between the flatness and the deviationsof the errors larger than 120598 In the following equation theoptimization problem is shown that is associated with a linearSVR

minimize 1

21199082+ 119862

119897

sum

119894=1

(120585119894+ 120585lowast

119894)

st 119910119894minus ⟨119908 119909

119894⟩ minus 119887 le 120598 + 120585

119894

⟨119908 119909119894⟩ + 119887 minus 119910

119894le 120598 + 120585

lowast

119894

120585119894 120585lowast

119894ge 0

(7)

We have an optimization problem of the convex typethat is solved in practice using the Lagrange method Theequations are rewritten using the primal objective functionand the corresponding constraints a process in which the so-called dual problem (see (8)) is obtained as follows

maximize minus1

2

119897

sum

119894119895=1

(120572119894minus 120572lowast

119894) (120572119895minus 120572lowast

119895) ⟨119909119894 119909119895⟩

minus 120598

119897

sum

119894=1

(120572119894+ 120572lowast

119894) +

119897

sum

119894=1

119910119894(120572119894minus 120572lowast

119894)

st119897

sum

119894=1

(120572119894minus 120572lowast

119894) = 0

120572119894 120572lowast

119894isin [0 119862]

(8)

From the expression in (8) it is possible to generalizethe formulation of the SVR in terms of a nonlinear function

Instead of calculating the inner products in the originalfeature space a kernel function 119896(119909 1199091015840) that computes theinner product in a transformed spacewas definedThis kernelfunction has to satisfy the so-called Mercerrsquos conditions [63]In our experimentation we have used the two most popularkernels in the literature [40] linear and radial basis

36 Artificial Neural Networks We used the multilayer per-ceptron (MLP) the most popular ANN variant [64] in ourexperimental work It has been demonstrated to be a univer-sal approximator of functions [65] ANNs are a particular caseof neural networks the mathematical formulation of whichis inspired by biological functions as they aim to emulatethe behavior of a set of neurons [64] This network has threelayers [66] one with the network inputs (features of the dataset) a hidden layer and an output layer where the predictionis assigned to each input instance Firstly the output of thehidden layer 119910hide is calculated from the inputs and then theoutput 119910hide is obtained according to the expressions shownin the following equation [66]

119910hide = 119891net (1198821119909 + 1198611)

119910output = 119891output (1198822119910hide + 1198612)

(9)

where 1198821is the weight matrix of the hidden layer 119861

1is the

bias of the hidden layer1198822is the weight matrix of the output

layer (eg the identitymatrix in the configurations tested)1198612

is the bias of the output layer 119891net is the activation functionof the hidden layer and 119891output is the activation function ofthe output layer These two functions depend on the chosenstructure but are typically the identity for the hidden layer and119905119886119899119904119894119892 for the output layer [66]

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 8: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

8 Journal of Applied Mathematics

Table 6 Methods notation

Bagging BGIterated Bagging IBRandom Subspaces RSAdaboostR2 R2Additive Regression ARREPTree RPM5P Model Tree M5PSupport Vector Regressor SVRMultilayer Perceptron MLP119896-Nearest Neighbor Regressor 119896NNLinear Regression LR

4 Results and Discussion

We compared the RMSE obtained for the regressors ina 10 times 10 cross-validation in order to choose the method thatis most suited to model this industrial problem The experi-ments were completed using theWEKA [20] implementationof the methods described above

All the ensembles under consideration have 100 baseregressors The methods that depend on a set of parametersare optimized as follows

(i) SVR with linear kernel the trade-off parameter 119862 inthe range 2ndash8

(ii) SVR with radial basis kernel 119862 from 1 to 16 and theparameter of the radial basis function gamma from10minus5 to 10

minus2(iii) multilayer perceptron the training parameters

momentum learning rate and number of neurons areoptimized in the ranges 01ndash04 01ndash06 and 5ndash15

(iv) kNN the number of neighbours is optimized from 1to 10

The notation used to describe the methods is detailed inTable 6

Regarding the notation two abbreviations have been usedbesides those that are indicated in Table 6 On the one handwe used the suffixes ldquoLrdquo ldquoSrdquo and ldquoErdquo for the linear square andexponential loss functions of AdaboostR2 and on the otherhand the trees that are either pruned (119875) or unpruned (119880)

appear between brackets Tables 7 8 and 9 set out the RMSEof each of the 14 outputs for each method

Finally a summary tablewith the bestmethods per outputis shown The indexes of the 39 methods that were tested areexplained in Tables 10 and 11 and in Table 12 the methodwith minimal RMSE is indicated according to the notationfor these indexes In the third column we also indicatethose methods that have larger RMSE but using a correctedresampled t-test [67] the differences are not statistically sig-nificative at a confidence level of 95 Analyzing the secondcolumn of Table 12 among the 39 configurations testedonly 4 methods obtained the best RMSE for one of the 14outputs AdaboostR2 with exponential loss and prunedM5Pas base regressorsmdashindex 26mdash(5 times) SVR with radial

basis function kernelmdashindex 6mdash(4 times) Iterated Baggingwith unprunedM5P as base regressorsmdashindex 15mdash(4 times)and Bagging with unpruned RP as the base regressor - index9mdash(1 time)

The performance of eachmethodmay be ranked Table 13presents the number of significative weaker performances ofeach method considering the 14 outputs modeled The mostrobust method is Iterated Bagging with unpruned M5P asbase regressors - index 15 - as in none of the 14 outputs wasit outperformed by other methods Besides selecting the bestmethod it is possible to obtain some additional conclusionsfrom this ranking table

(i) Linear models like SVR linearmdashindex 5mdash and LRmdashindex 3mdashdo not fit the datasets in the study very wellBothmethods are ranked together at themiddle of thetableThe predicted variables therefore needmethodsthat can operate with nonlinearities

(ii) SVR with Radial Basis Function Kernelmdashindex 6mdashis the only nonensemble method with competitiveresults but it needs to tune 2 parametersMLPmdashindex7mdashis not a good choice It needs to tune 3 parametersand is not a well-ranked method

(iii) For some ensemble configurations there are dif-ferences in the number of statistically significativeperformances between the results from pruned andunpruned trees while in other cases these differ-ences do not exist in general though the resultsof unpruned trees are more accurate specially withthe top-ranked methods In fact the only regressorwhich is not outperformed by other methods inany output has unpruned trees Unpruned trees aremore sensitive to changes in the training set Sothe predictions of unpruned trees when their baseregressors are trained in an ensemble are more likelyto output diverse predictions If the predictions of allbase regressors agreed there would be little benefit inusing ensembles Diversity balances faulty predictionsby some base regressors with correct predictions byothers

(iv) The top-ranked ensembles use the most accurate baseregressor (ie M5P) All M5P configurations havefewer weaker performances than the correspondingRP configuration In particular the lowest rank wasassigned to theAR -RP configurationswhileARM5PU came second best

(v) Ensembles that lose a lot of information such as RSare ranked at the bottom of the table The table showsthat the lower the percentage of features RS use theworse they perform In comparison to other ensemblemethods RS is a method that is very insensitive tonoise so it can point to data that are not noisy

(vi) In R2 M5P ensembles the loss function does notappear to be an important configuration parameter

(vii) IB M5P U is the only configuration that never hadsignificant losses when compared with the othermethods

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 9: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 9

Table 7 Root mean squared error 13

Volume Depth Width Length MRRRP 21648227 2689 581 579 1716673M5P 21077329 2301 451 553 1607633LR 19752189 2384 704 1473 1501593119896NN 19375662 2357 542 684 1584242SVR linear 1978945 2419 707 1623 1480032SVR radial basis 20077424 1885 438 535 1460337MLP 20764699 2298 47 539 1629243BG RP P 20021292 2149 497 508 1552679BG RP U 2052908 1998 466 49 1560389BG M5P P 20063657 2061 443 527 1529392BG M5P U 19754675 195 431 538 150221IB RP P 20276785 2205 487 514 1567757IB RP U 21938893 217 501 511 1591196IB M5P P 19783383 208 442 488 1531801IB M5P U 19515416 1965 43 478 1483083R2-L RP P 19176523 2227 494 507 1578847R2-L RP U 20636982 2171 525 529 1700431R2-L M5P P 18684392 2081 437 484 1503137R2-L M5P U 1815874 2051 434 484 1520901R2-S RP P 19390839 2303 51 518 1600743R2-S RP U 20045321 2121 511 531 1640124R2-S M5P P 17311772 2083 449 471 1507005R2-S M5P U 17391436 2087 452 472 1524581R2-E RP P 19252931 2259 502 507 1585403R2-E RP U 20592086 2144 522 521 1731845R2-E M5P P 17105622 2125 453 466 1507801R2-E M5P U 17294895 2132 455 467 1509023AR RP P 21575039 2551 542 556 1724982AR RP U 27446743 2471 583 615 1862828AR M5P P 20880584 2227 444 508 160765AR M5P U 19457279 1958 451 494 1553829RS 50 RP P 20052636 2658 568 774 1585427RS 50 RP U 20194624 2533 535 754 1566928RS 50 M5P P 20101327 266 538 1542 1540277RS 50 M5P U 19916684 2565 539 1545 1534984RS 75 RP P 19927003 2333 527 598 1586158RS 75 RP U 20784567 2161 504 625 1625164RS 75 M5P P 19964896 2355 465 829 1522787RS 75 M5P U 19742035 2211 46 839 1529571

Once the best data-mining technique for this industrialtask is identified the industrial implementation of theseresults can follow the procedure outlined below

(1) The best model is run to predict one output bychanging two input variables of the process in smallsteps and maintaining a fixed value for the otherinputs

(2) 3D plots of the output related to the two varied inputsshould be generatedThe process engineer can extractinformation from these 3D plots on the best millingconditions

As an example of this methodology the following casewas built Two Iterated Bagging ensembles with unprunedM5P as their base regressors are built for two outputs thewidth and the depth errors respectively Then the modelswere run by varying two inputs in small steps across thetest range pulse intensity (PI) and scanning speed (SS) Thiscombination of inputs and outputs presents an immediateinterest from the industrial point of view because in aworkshop the DES geometry is fixed by the customer andthe process engineer can only change three parameters of thelaser milling process (pulse frequency (PF) scanning speed

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 10: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

10 Journal of Applied Mathematics

Table 8 Root mean squared error 23

Volume error Depth error Width error Length error MRR errorRP 21687574 3066 591 562 1635557M5P 21450091 198 619 509 1600984LR 19752193 2392 676 1467 1496309119896NN 19369636 2369 498 66 1463651SVR linear 19781788 2417 71 1623 1478477SVR radial basis 20078596 1898 442 537 1450461MLP 20675361 2196 493 537 1738269BG RP P 20103757 2318 462 498 1512257BG RP U 20534153 216 44 487 153272BG M5P P 20059431 1966 532 508 151625BG M5P U 19757549 1919 522 515 1486054IB RP P 207848 2305 478 518 1552556IB RP U 21663114 2379 479 52 1653231IB M5P P 20145065 1973 444 471 1517767IB M5P U 19826122 1944 433 458 1493358R2-L RP P 2013655 2408 464 503 1544266R2-L RP U 20850097 2395 487 538 167783R2-L M5P P 1847995 2061 468 477 147324R2-L M5P U 18374085 2071 47 481 1481765R2-S RP P 19559243 2475 465 524 1610762R2-S RP U 20101769 2287 471 539 1587167R2-S M5P P 17277515 2109 453 472 1445998R2-S M5P U 17389235 2088 452 474 1449738R2-E RP P 19565769 2426 462 512 1564502R2-E RP U 20627571 2438 482 535 1672126R2-E M5P P 17219689 2161 457 475 1432448R2-E M5P U 17335689 2161 458 479 1437136AR RP P 2149783 2777 549 543 1620807AR RP U 27192655 2743 529 61 2057127AR M5P P 21132963 198 455 466 1599584AR M5P U 1954975 1955 478 482 1544838RS 50 RP P 20077517 2854 541 693 1521765RS 50 RP U 2023542 2684 524 694 1507478RS 50 M5P P 20105445 2591 598 1181 1538734RS 50 M5P U 1992461 2561 579 1175 1494089RS 75 RP P 19950189 2586 49 573 1516501RS 75 RP U 20646756 245 475 606 157435RS 75 M5P P 20016047 2205 576 666 151852RS 75 M5P U 19743178 2179 553 675 1472975

and pulse intensity) In viewof these restrictions the engineerwill wish to know the expected errors for the DES geometrydepending on the laser parameters that can be changedThe rest of the inputs for the models (DES geometry) arefixed at 70 120583m depth 65 120583m width and 0 120583m length and

PF is fixed at 45KHz Figure 5 shows the 3D plots obtainedfrom these calculations allowing the process engineer tochoose the following milling conditions SS in the range of325ndash470mms and PI in the range of 74ndash91 for minimumerrors in DES depth and width

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 11: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 11

Table 9 Root mean squared error 33

Volume relative error Depth relative error Width relative error Length relative errorRP 03 054 004 009M5P 03 033 004 009LR 027 043 005 013119896NN 027 038 004 01SVR linear 027 044 005 015SVR radial basis 028 029 004 01MLP 029 034 004 01BG RP P 028 04 004 008BG RP U 029 036 004 008BG M5P P 028 032 004 009BG M5P U 027 031 004 009IB RP P 029 038 004 009IB RP U 03 038 004 009IB M5P P 028 032 004 009IB M5P U 028 03 004 009R2-L RP P 027 039 004 008R2-L RP U 029 038 004 01R2-L M5P P 025 032 004 009R2-L M5P U 026 032 004 009R2-S RP P 027 039 004 009R2-S RP U 028 038 004 01R2-S M5P P 024 032 004 01R2-S M5P U 024 033 004 01R2-E RP P 027 04 004 009R2-E RP U 029 038 004 011R2-E M5P P 024 033 004 01R2-E M5P U 024 033 004 01AR RP P 03 047 004 009AR RP U 038 044 005 011AR M5P P 029 033 004 009AR M5P U 027 029 004 009RS 50 RP P 028 048 004 009RS 50 RP U 028 044 004 009RS 50 M5P P 028 043 004 009RS 50 M5P U 028 042 004 009RS 75 RP P 028 045 004 008RS 75 RP U 029 041 004 008RS 75 M5P P 028 036 004 009RS 75 M5P U 027 035 004 009

Table 10 Index notation for the nonensemble methods

1 2 3 4 5 6 7RP M5P LR 119896NN SVR linear SVR radial basis MLP

5 Conclusions

In this study extensive modeling has been presented withdifferent data-mining techniques for the prediction of geo-metrical dimensions and productivity in the laser milling

of microcavities for the manufacture of drug-eluting stentsExperiments on 316L Stainless Steel have been performed toprovide data for the models The experiments vary most ofthe process parameters that can be changed under industrialconditions scanning speed laser pulse intensity and laserpulse frequencymoreover 2 different geometries and 3 differ-ent sizes were manufactured within the experimental test toobtain informative data sets for this industrial task Besidesa very extensive analysis and characterization of the resultsof the experimental test were performed to cover all the

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 12: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

12 Journal of Applied Mathematics

Table 11 Index notation for the ensemble methods

BG IB R2-L R2-S R2-E AR RS 50 RS 75RP P 8 12 16 20 24 28 32 36RP U 9 13 17 21 25 29 33 37M5P P 10 14 18 22 26 30 34 38M5P U 11 15 19 23 27 31 35 39

Table 12 Summary table

Bestmethod

Statistically equivalent

Volume 263 4 5 6 7 11 14 15 16 18 1920 22 23 27 31 35 38 39

Depth 69 10 11 13 14 15 17 18 19 21 22

23 25 31

Width 152 6 7 9 10 11 12 14 18 19 22

23 26 27 30 31 39

Length 264 6 9 12 13 14 15 16 17 18 19

22 23 24 25 27 30 31 37

MRR 62 3 4 5 7 8 9 10 11 12 13

14 15 16 17 18 19 20 21 22 23 2425 26 27 30 31 32 33 34 35 36 38 39

Volumeerror 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depth error 62 10 11 14 15 18 19 22 23 26 27

30 31

Width error 156 8 9 12 13 14 16 17 18 19 2021 22 23 24 25 26 27 30 37

Length 154 8 9 14 16 18 19 22 23 26 27

30 31

MRR error 26

1 2 3 4 5 6 8 9 10 11 1213 14 15 16 17 18 19 20 21 22 2324 25 27 28 30 31 32 33 34 35 36

37 38 39

Volumerelative error 26

3 4 5 6 10 11 15 18 19 22 2327 31 34 35 36 38 39

Depthrelative error 6

2 10 11 14 15 18 19 22 23 26 2730 31

Widthrelative error 15 2 6 7 9 10 11 14 24 30 31 38 39

Lengthrelative error 9

1 2 8 10 11 12 13 14 15 16 2028 30 31 32 33 34 35 36 37 38 39

possible optimization strategies that industry might requirefor DESmanufacturing from high-productivity objectives tohigh geometrical accuracy in just one geometrical axis Bydoing so 14 data sets were generated each of 162 instances

Table 13 Methods ranking

Indexes MethodsNumber

ofdefeats

15 IB M5P U 031 AR M5P U 16 14 18 1922 23

SVR radial basis IB M5P P R2-L M5P P 2R2-L M5P U R2-S M5P P and R2-S M5PU

11 26 27 BGM5P U R2-EM5P P and R2-EM5P U 310 30 BG M5P P and AR M5P P 49 BG RP U 539 RS 75 M5P U 62 4 16 38 M5P 119896NN R2-L RP P and RS 75M5P P 712 13 35 IB RP P IB RP U and RS 50 M5P U 8

3 5 8 17 2024 25 34 36

LR SVR linear BG RP P R2-L RP U9R2-S RP P R2-E RP P R2-E RP U

RS 50M5P P and RS 75 RP P7 21 37 MLP R2-S RP U and RS 75 RP U 1032 33 RS 50 RP P and RS 50 RP U 111 28 RP and AR RP P 1229 AR RP U 14

The experimental test clearly outlined that the geometryof the feature to be machined will affect the performanceof the milling process The test also shows that it is noteasy to find the proper combination of process parametersto achieve the final part which makes it clear that thelaser micromilling of such geometries is a complex processto control Therefore the use of data-mining techniques isproposed for the prediction and optimization of this processEach variable to predict was modelled by regression methodsto forecast a continuous variable

The paper shows an exhaustive test covering 39 regressionmethod configurations for the 14 output variables A 10 times 10

cross-validation was used in the test to identify the methodswith a relatively better RMSE A corrected resampled t-testwas used to estimate significative differencesThe test showedthat ensemble regression techniques using M5P unprunedtrees gave a better performance than other well-establishedsoft computing techniques such as ANNs and Linear Regres-sion techniques SVR with Radial Basis Function Kernelwas also a very competitive method but required parametertuning We propose the use of an Iterated Bagging techniquewith an M5P unpruned tree as a base regressor because its

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 13: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 13

22

24

26

28

3

32

34

36

times104

600500

400300

200 6070

8090

100

Forecast depth error f(intensity speed)

SpeedIntensity

times105

600

500

400300

200 6070

8090

100

Forecast width error f(intensity speed)

SpeedIntensity

45

4

35

3

25

Figure 5 3D plots of the predicted depth and widthrsquos errors fromthe Iterated Bagging ensembles

RMSE was never significantly worse than the RMSE of any ofthe other methods for any of the 14 variables

Future work will consider applying the experimentalprocedure to different polymers magnesium and otherbiodegradable and biocompatible elements as well as todifferent geometries of industrial interest other than DESsuch as microchannels Moreover as micromachining is acomplex process wheremany variables play an important rolein the geometrical dimensions of the machined workpiecethe application of visualization techniques such as scatterplot matrices and start plots to evaluate the relationshipsbetween inputs and outputs will help us towards a clearerunderstanding of this promising machining process at anindustrial level

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to express their gratitude to theGREP research group at the University of Girona and theTecnologico de Monterrey for access to their facilities duringthe experiments This work was partially funded throughGrants from the IREBID Project (FP7-PEOPLE-2009-IRSES-247476) of the European Commission and Projects TIN2011-24046 and TECNIPLAD (DPI2009-09852) of the SpanishMinistry of Economy and Competitiveness

References

[1] K Sugioka M Meunier and A Pique Laser Precision Micro-fabrication vol 135 Springer 2010

[2] S Garg and P W Serruys ldquoCoronary stents current statusrdquoJournal of the American College of Cardiology vol 56 no 10pp S1ndashS42 2010

[3] D M Martin and F J Boyle ldquoDrug-eluting stents for coronaryartery disease a reviewrdquo Medical Engineering and Physics vol33 no 2 pp 148ndash163 2011

[4] S Campanelli G Casalino and N Contuzzi ldquoMulti-objectiveoptimization of laser milling of 5754 aluminum alloyrdquo Optics ampLaser Technology vol 52 pp 48ndash56 2013

[5] J Ciurana G Arias and T Ozel ldquoNeural network modelingand particle swarm optimization (PSO) of process parametersin pulsed laser micromachining of hardened AISI H13 steelrdquoMaterials and Manufacturing Processes vol 24 no 3 pp 358ndash368 2009

[6] J ChengW Perrie S P Edwardson E Fearon G Dearden andK G Watkins ldquoEffects of laser operating parameters on metalsmicromachining with ultrafast lasersrdquo Applied Surface Sciencevol 256 no 5 pp 1514ndash1520 2009

[7] I E Saklakoglu and S Kasman ldquoInvestigation of micro-milling process parameters for surface roughness and millingdepthrdquo The International Journal of Advanced ManufacturingTechnology vol 54 no 5ndash8 pp 567ndash578 2011

[8] D Ashkenasi T Kaszemeikat N Mueller R Dietrich H JEichler and G Illing ldquoLaser trepanning for industrial applica-tionsrdquo Physics Procedia vol 12 pp 323ndash331 2011

[9] B S Yilbas S S Akhtar and C Karatas ldquoLaser trepanning ofa small diameter hole in titanium alloy temperature and stressfieldsrdquo Journal of Materials Processing Technology vol 211 no 7pp 1296ndash1304 2011

[10] R Biswas A S Kuar S Sarkar and SMitra ldquoAparametric studyof pulsed NdYAG laser micro-drilling of gamma-titaniumaluminiderdquo Optics amp Laser Technology vol 42 no 1 pp 23ndash312010

[11] L Tricarico D Sorgente and L D Scintilla ldquoExperimentalinvestigation on fiber laser cutting of Ti6Al4V thin sheetrdquoAdvanced Materials Research vol 264-265 pp 1281ndash1286 2011

[12] N Muhammad D Whitehead A Boor W Oppenlander ZLiu and L Li ldquoPicosecond laser micromachining of nitinol andplatinum-iridium alloy for coronary stent applicationsrdquoAppliedPhysics A Materials Science and Processing vol 106 no 3 pp607ndash617 2012

[13] H Meng J Liao Y Zhou and Q Zhang ldquoLaser micro-processing of cardiovascular stent with fiber laser cuttingsystemrdquo Optics amp Laser Technology vol 41 no 3 pp 300ndash3022009

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 14: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

14 Journal of Applied Mathematics

[14] T-C Chen and R B Darling ldquoLaser micromachining ofthe materials using in microfluidics by high precision pulsednear and mid-ultraviolet NdYAG lasersrdquo Journal of MaterialsProcessing Technology vol 198 no 1ndash3 pp 248ndash253 2008

[15] D Bruneel G Matras R le Harzic N Huot K Konig andE Audouard ldquoMicromachining of metals with ultra-short Ti-Sapphire lasers prediction and optimization of the processingtimerdquo Optics and Lasers in Engineering vol 48 no 3 pp 268ndash271 2010

[16] M Pfeiffer A Engel S Weiszligmantel S Scholze and G ReisseldquoMicrostructuring of steel and hard metal using femtosecondlaser pulsesrdquo Physics Procedia vol 12 pp 60ndash66 2011

[17] D Karnakis M Knowles P Petkov T Dobrev and S DimovldquoSurface integrity optimisation in ps-laser milling of advancedengineering materialsrdquo in Proceedings of the 4th InternationalWLT-Conference on Lasers in Manufacturing Munich Ger-many 2007

[18] H Qi and H Lai ldquoMicromachining of metals and thermalbarrier coatings using a 532 nm nanosecond ber laserrdquo PhysicsProcedia vol 39 pp 603ndash612 2012

[19] D Teixidor I Ferrer J Ciurana and T Ozel ldquoOptimization ofprocess parameters for pulsed laser milling of micro-channelson AISI H13 tool steelrdquo Robotics and Computer-IntegratedManufacturing vol 29 no 1 pp 209ndash219 2013

[20] IWitten andE FrankDataMining PracticalMachine LearningTools and Techniques Morgan Kaufmann 2nd edition 2005

[21] J Han M Kamber and J Pei Data Mining Concepts andTechniques Morgan Kaufmann 2006

[22] A J Torabi M J Er L Xiang et al ldquoA survey on artificialintelligence technologies inmodeling of high speed end-millingprocessesrdquo in Proceedings of the IEEEASME InternationalConference on Advanced Intelligent Mechatronics (AIM rsquo09) pp320ndash325 Singapore July 2009

[23] M Chandrasekaran M Muralidhar C M Krishna and U SDixit ldquoApplication of soft computing techniques in machiningperformance prediction and optimization a literature reviewrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 445ndash464 2010

[24] A K Choudhary J A Harding andMK Tiwari ldquoDataminingin manufacturing a review based on the kind of knowledgerdquoJournal of Intelligent Manufacturing vol 20 no 5 pp 501ndash5212009

[25] A K Dubey and V Yadava ldquoLaser beammachiningmdasha reviewrdquoInternational Journal ofMachine Tools andManufacture vol 48no 6 pp 609ndash628 2008

[26] B F Yousef G K Knopf E V Bordatchev and S K NikumbldquoNeural networkmodeling and analysis of the material removalprocess during laser machiningrdquo The International Journal ofAdvanced Manufacturing Technology vol 22 no 1-2 pp 41ndash532003

[27] S Campanelli G Casalino A Ludovico and C BonserioldquoAn artificial neural network approach for the control of thelaser milling processrdquo The International Journal of AdvancedManufacturing Technology vol 66 no 9ndash12 pp 1777ndash17842012

[28] C Jimin Y Jianhua Z Shuai Z Tiechuan and G DixinldquoParameter optimization of non-vertical laser cuttingrdquo TheInternational Journal of Advanced Manufacturing Technologyvol 33 no 5-6 pp 469ndash473 2007

[29] D Teixidor M Grzenda A Bustillo and J Ciurana ldquoMod-eling pulsed laser micromachining of micro geometries usingmachine-learning techniquesrdquo Journal of Intelligent Manufac-turing 2013

[30] N C Oza andK Tumer ldquoClassifier ensembles select real-worldapplicationsrdquo Information Fusion vol 9 no 1 pp 4ndash20 2008

[31] P Santos L Villa A Renones A Bustillo and JMaudes ldquoWindturbines fault diagnosis using ensemble classifiersrdquo in Advancesin Data Mining Applications and Theoretical Aspects vol 7377pp 67ndash76 Springer Berlin Germany 2012

[32] A Bustillo and J J Rodrıguez ldquoOnline breakage detectionof multitooth tools using classifier ensembles for imbalanceddatardquo International Journal of Systems Science pp 1ndash13 2013

[33] J-F Dıez-Pastor A Bustillo G Quintana and C Garcıa-Osorio ldquoBoosting projections to improve surface roughnessprediction in high-torque milling operationsrdquo Soft Computingvol 16 no 8 pp 1427ndash1437 2012

[34] A Bustillo E Ukar J J Rodriguez and A Lamikiz ldquoModellingof process parameters in laser polishing of steel componentsusing ensembles of regression treesrdquo International Journal ofComputer Integrated Manufacturing vol 24 no 8 pp 735ndash7472011

[35] A Bustillo J-F Dıez-Pastor G Quintana and C Garcıa-Osorio ldquoAvoiding neural network fine tuning by using ensemblelearning application to ball-end milling operationsrdquoThe Inter-national Journal of AdvancedManufacturing Technology vol 57no 5ndash8 pp 521ndash532 2011

[36] S Ferreiro B Sierra I Irigoien and E Gorritxategi ldquoDatamining for quality control burr detection in the drillingprocessrdquo Computers amp Industrial Engineering vol 60 no 4 pp801ndash810 2011

[37] B E Boser I M Guyon and V N Vapnik ldquoTraining algorithmfor optimal margin classifiersrdquo in Proceedings of the 5th AnnualWorkshop on Computational Learning Theory pp 144ndash152ACM Pittsburgh Pa USA July 1992

[38] R Beale and T Jackson Neural Computing An IntroductionIOP Publishing Bristol UK 1990

[39] A Sykes An Introduction to Regression Analysis Law SchoolUniversity of Chicago 1993

[40] X Wu V Kumar Q J Ross et al ldquoTop 10 algorithms in dataminingrdquo Knowledge and Information Systems vol 14 no 1 pp1ndash37 2008

[41] N Tosun and L Ozler ldquoA study of tool life in hot machin-ing using artificial neural networks and regression analysismethodrdquo Journal ofMaterials Processing Technology vol 124 no1-2 pp 99ndash104 2002

[42] A Azadeh S F Ghaderi and S Sohrabkhani ldquoAnnual elec-tricity consumption forecasting by neural network in highenergy consuming industrial sectorsrdquo Energy Conversion andManagement vol 49 no 8 pp 2272ndash2278 2008

[43] P Palanisamy I Rajendran and S Shanmugasundaram ldquoPre-diction of tool wear using regression and ANN models inend-milling operationrdquo The International Journal of AdvancedManufacturing Technology vol 37 no 1-2 pp 29ndash41 2008

[44] S Vijayakumar and S Schaal ldquoApproximate nearest neighborregression in very high dimensionsrdquo inNearest-Neighbor Meth-ods in Learning and Vision Theory and Practice pp 103ndash142MIT Press Cambridge Mass USA 2006

[45] L Kuncheva ldquoCombining classifiers soft computing solutionsrdquoin Pattern Recognition From Classical to Modern Approachespp 427ndash451 2001

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 15: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Journal of Applied Mathematics 15

[46] J Yu L Xi and X Zhou ldquoIdentifying source (s) of out-of-control signals in multivariate manufacturing processes usingselective neural network ensemblerdquo Engineering Applications ofArtificial Intelligence vol 22 no 1 pp 141ndash152 2009

[47] T W Liao F Tang J Qu and P J Blau ldquoGrinding wheel con-dition monitoring with boosted minimum distance classifiersrdquoMechanical Systems and Signal Processing vol 22 no 1 pp 217ndash232 2008

[48] S Cho S Binsaeid and S Asfour ldquoDesign of multisensorfusion-based tool condition monitoring system in endmillingrdquoThe International Journal of Advanced Manufacturing Technol-ogy vol 46 no 5ndash8 pp 681ndash694 2010

[49] S Binsaeid S Asfour S Cho and A Onar ldquoMachine ensembleapproach for simultaneous detection of transient and gradualabnormalities in end milling using multisensor fusionrdquo Journalof Materials Processing Technology vol 209 no 10 pp 4728ndash4738 2009

[50] H Akaike ldquoA new look at the statistical model identificationrdquoIEEE Transactions on Automatic Control vol 19 no 6 pp 716ndash723 1974

[51] J RQuinlan ldquoLearningwith continuous classesrdquo inProceedingsof the 5th Australian Joint Conference on Artificial Intelligencevol 92 pp 343ndash348 Singapore 1992

[52] T M Khoshgoftaar E B Allen and J Deng ldquoUsing regressiontrees to classify fault-prone software modulesrdquo IEEE Transac-tions on Reliability vol 51 no 4 pp 455ndash462 2002

[53] T K Ho ldquoThe random subspace method for constructingdecision forestsrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 20 no 8 pp 832ndash844 1998

[54] TGDietterichl ldquoEnsemble learningrdquo inTheHandbook of BrainTheory and Neural Networks pp 405ndash408 2002

[55] L Breiman ldquoBagging predictorsrdquoMachine Learning vol 24 no2 pp 123ndash140 1996

[56] Y Freund andR E Schapire ldquoExperiments with a new boostingalgorithmrdquo in Proceedings of the 13th International ConferenceonMachine Learning (ICML rsquo96) vol 96 pp 148ndash156 Bari Italy1996

[57] Y Freund and R E Schapire ldquoA desicion-theoretic general-ization of on-line learning and an application to boostingrdquoin Computational Learning Theory pp 23ndash37 Springer BerlinGermany 1995

[58] L Breiman ldquoUsing iterated bagging to debias regressionsrdquoMachine Learning vol 45 no 3 pp 261ndash277 2001

[59] H Drucker ldquoImproving regressors using boosting techniquesrdquoin Proceedings of the 14th International Conference on MachineLearning (ICML rsquo97) vol 97 pp 107ndash115 Nashville Tenn USA1997

[60] J H Friedman ldquoStochastic gradient boostingrdquo ComputationalStatistics amp Data Analysis vol 38 no 4 pp 367ndash378 2002

[61] DWAhaDKibler andMKAlbert ldquoInstance-based learningalgorithmsrdquoMachine Learning vol 6 no 1 pp 37ndash66 1991

[62] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004

[63] C Cortes and V Vapnik ldquoSupport-vector networksrdquo MachineLearning vol 20 no 3 pp 273ndash297 1995

[64] J E Dayhoff and J M DeLeo ldquoArtificial neural networksopening the black boxrdquo Cancer vol 91 supplement 8 pp 1615ndash1635 2001

[65] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[66] W H Delashmit and M T Manry ldquoRecent developments inmultilayer perceptron neural networksrdquo in Proceedings of the7th Annual Memphis Area Engineering and Science Conference(MAESC rsquo05) 2005

[67] C Nadeau and Y Bengio ldquoInference for the generalizationerrorrdquoMachine Learning vol 52 no 3 pp 239ndash281 2003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 16: Research Article Modelling Laser Milling of Microcavities ...riubu.ubu.es/bitstream/10259/4243/1/Santos-JAM_2014.pdf · Research Article Modelling Laser Milling of Microcavities for

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of