
Stock Trading using PSEC and RSPOP: A novel evolving rough set-based neuro-fuzzy approach

K. K. Ang and C. Quek
Centre for Computational Intelligence, formerly the Intelligent Systems Laboratory

School of Computer Engineering, Nanyang Technological University
Blk N4, #Bla-02, Nanyang Avenue, Singapore 639798

kkang@pmail.ntu.edu.sg & ashcquek@ntu.edu.sg

Abstract - This paper presents a novel evolving rough set-based neuro-fuzzy approach to stock trading using the method of forecasting the stock price difference. The proposed Pseudo Self-Evolving Cerebellar (PSEC) algorithm and the Rough Set-based Pseudo Outer-Product (RSPOP) algorithm are used to construct a novel evolving rough set-based neuro-fuzzy system as the underlying forecast modeling tool, identifying the fuzzy sets and fuzzy rules respectively. The proposed price difference forecast model is then incorporated into a forecast-bottleneck-free trading decision model to investigate achievable trading profit. Experimental results on real-world stock market data show that the proposed stock trading model with the evolving rough set-based neuro-fuzzy price difference forecast yielded significantly higher profits than the trading model without forecast.

1 Introduction

Neural networks have been increasingly applied to technical financial forecasting [1-3] because of their ability to learn complex non-linear mappings and to self-adapt to different statistical distributions. In recent years, a number of research efforts have applied neuro-fuzzy systems in financial engineering [4]. The main strength of neuro-fuzzy systems is that they are universal approximators [5] with the ability to solicit interpretable IF-THEN rules [6]. Some works that applied neuro-fuzzy systems to forecasting stock prices are [4,7-11]. However, this strength of neuro-fuzzy systems involves two contradictory requirements in fuzzy modeling: interpretability versus accuracy. The fuzzy modeling research field is thus divided into two areas: linguistic fuzzy modeling, focused on interpretability, mainly the Mamdani model [12]; and precise fuzzy modeling, focused on accuracy, mainly the Takagi-Sugeno-Kang (TSK) model [6,13-15]. The dilemma in the use of the Mamdani model is that an increased number of fuzzy rules is needed to achieve accuracy comparable to the TSK model, which in turn decreases interpretability.

Recently, the Rough Set-based Pseudo Outer-Product (RSPOP) fuzzy rule identification algorithm [16] was proposed; it integrates the sound concept of knowledge reduction from rough set theory with the Pseudo Outer-Product (POP) algorithm [17]. RSPOP performs feature selection through the reduction of redundant attributes and also reduces the fuzzy rules without redundant attributes. As there are many possible reducts for a given rule set, RSPOP uses an objective measure to identify the reducts that improve, and do not deteriorate, the inferred consequence after attribute and rule reduction. Using this rough set approach, the RSPOP algorithm [16] is able to identify significantly fewer fuzzy rules and improves the interpretability and accuracy of the linguistic fuzzy model. This motivates the use of the RSPOP algorithm to identify the fuzzy rules of the proposed forecast model in this work.

However, the application of the RSPOP algorithm assumes that the fuzzy sets have already been generated from the numerical training data, which is a crucial step in fuzzy modeling. Although there are three different semantics of fuzzy sets [18], there are no measures available to evaluate the goodness or correctness of the membership functions generated [19]. Several methods of generating fuzzy membership functions are present in the literature, namely: heuristics, probability-to-possibility transformations, histograms, nearest-neighbour methods, feed-forward neural networks, clustering, mixture decomposition, as well as evolutionary techniques [19]. The use of evolutionary techniques in generating fuzzy membership functions can be classified into two distinct approaches:

* Evolutionary Approach - uses genetic algorithms in the search for optimal fuzzy rules and fuzzy membership functions [20,21].

* Evolving Approach - uses algorithms that evolve and learn the fuzzy rules and fuzzy membership functions from training data [22,23].

This paper presents a novel framework using an evolving rough set-based neuro-fuzzy approach in stock trading, namely the Pseudo Outer-Product based Fuzzy Neural Network using the Compositional Rule of Inference (POPFNN-CRI(S)) [24] with fuzzy sets generated using the proposed Pseudo Self-Evolving Cerebellar (PSEC) algorithm and fuzzy rules identified using the RSPOP algorithm [16]. Section 2 outlines the architecture of POPFNN-CRI(S). Section 3 presents the proposed PSEC algorithm. Sections 4 and 5 present the price difference forecast method and the stock trading model used. Section 6 presents experimental trading results from real-world stock market data. Section 7 concludes this paper.

0-7803-9363-5/05/$20.00 ©2005 IEEE. 1032


2 POPFNN-CRI(S)

The Pseudo Outer-Product based Fuzzy Neural Network (POPFNN) is a family of neuro-fuzzy systems based on the linguistic fuzzy model [17]. Three members of POPFNN exist in the literature: POPFNN-CRI(S) [24], which is based on the Compositional Rule of Inference; POPFNN-TVR [25], which is based on Truth Value Restriction; and POPFNN-AARS(S) [26], which is based on the Approximate Analogical Reasoning Scheme. The POPFNN architecture is a five-layer neural network as shown in Figure 1. Each layer performs a specific fuzzy operation. In layer I, input node I_i represents the i-th input linguistic variable [18] of the input x_i. In layer II, input-label node IL_{i,j} represents the j-th linguistic label of the input x_i using the membership function μ_{i,j}(x). In layer III, rule node R_k represents the k-th IF-THEN fuzzy rule, with antecedents c_k ∈ C and consequents d_k ∈ D as its weights. In layer IV, output-label node OL_{m,l} represents the l-th linguistic label of the output y_m. In layer V, output node O_m represents the output linguistic variable of the output y_m.
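As an illustration, the five-layer structure described above can be sketched as a generic Mamdani-style forward pass. This is a simplified sketch (Gaussian labels, min firing strength, centroid defuzzification), not the exact POPFNN-CRI(S) inference; the function and parameter names are illustrative.

```python
import numpy as np

def gaussmf(x, c, s):
    """Gaussian membership function for a linguistic label with centre c, width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def popfnn_forward(x, in_labels, rules, out_centres):
    """Illustrative pass through the five layers: fuzzify (I-II), fire rules
    (III), then aggregate output labels into a crisp value (IV-V).
    in_labels[i] lists (centre, sigma) for the labels of input i;
    each rule is (list of (input index, label index), output label index)."""
    # Layers I-II: membership value of each input in each of its labels
    mu = [[gaussmf(x[i], c, s) for (c, s) in labels]
          for i, labels in enumerate(in_labels)]
    # Layer III: rule firing strengths; Layers IV-V: centroid aggregation
    y_num = y_den = 0.0
    for antecedents, out_label in rules:
        w = min(mu[i][j] for i, j in antecedents)   # min over antecedent labels
        y_num += w * out_centres[out_label]
        y_den += w
    return y_num / y_den if y_den else 0.0
```

With one input having labels "low" and "high", a rule base mapping each label to a matching output label reproduces the expected crisp output near the active label's centre.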


Figure 1: Structure of POPFNN using RSPOP

The learning process of POPFNN consists of three phases, namely fuzzy membership generation, fuzzy rule identification, and supervised learning for fine-tuning. Various fuzzy membership learning algorithms can be used in POPFNN: Learning Vector Quantization (LVQ) [27], Fuzzy Kohonen Partitioning (FKP) [24] or Discrete Incremental Clustering (DIC) [28]. The Pseudo Outer-Product (POP) algorithm [25], its variant LazyPOP [17], or the Rough Set-based Pseudo Outer-Product (RSPOP) algorithm [16] is used in the POPFNN family of networks to identify fuzzy rules.

3 Pseudo Self-Evolving Cerebellar (PSEC)

The human brain is precisely wired to process sensory information into coherent patterns of activity that form the basis of our perception, thoughts and actions; but this precise wiring is not fully developed at birth [29]. The initially coarse pattern of connections is subsequently refined by activity-dependent mechanisms that precisely match the pre-synaptic neurons. Thus the development of our nervous system proceeds in two overlapping stages. In the first stage, the basic architecture and coarse connection patterns are laid out without any activity-dependent processes. In the second stage, this initial architecture is refined in activity-dependent ways. The coarse connection patterns begin from a large overproduction of neurons, where active neurons stabilize through the uptake of trophic factors whereas their unsuccessful competitors die. These findings in neuroscience [29] inspired the idea of developing an evolving algorithm with these two overlapping stages, similar to the development of our nervous system.

Two existing algorithms that offered additional inspiration are the Cerebellar Model Articulation Controller (CMAC) in [30,31] and the density-based clustering algorithm DBSCAN in [32]. CMAC modeled the high degree of regularity present in the organization of the cerebellar cortex [29] and offers numerous advantages from the implementation point of view. DBSCAN is designed to discover clusters of arbitrary shape. These two algorithms offered the insight of developing a regularly spaced initial coarse pattern, as well as the insight of using density as a trophic factor, in the development of the Pseudo Self-Evolving Cerebellar (PSEC) algorithm for generating fuzzy membership functions.

PSEC Algorithm

* Step 1: Initialization
Given a data set X = {x_1, x_2, ..., x_k, ..., x_n} ⊂ R, select the initial number of neurons m, the pseudo potential threshold β, the terminating criterion ε and the maximum number of iterations T_max. Initialize the learning constant α = 1/n and the number of clusters c = 0.

* Step 2: Construct Cerebellar structure
Construct the initial cerebellar structure with m regularly spaced neurons V = {v_1^(0), ..., v_m^(0)} and initialise the pseudo weights V^P = {v_1^P, v_2^P, ..., v_m^P} = 0. Compute gap_x using (1):

gap_x = (max_x − min_x) / m    (1)

where
min_x is the minimum value of x_k;
max_x is the maximum value of x_k.


* Step 3: Structural learning
Perform one-pass pseudo weight learning to discover the density distribution of the training data.
For k = 1..n:
a. Find the winner neuron w and the runner-up neuron r with the minimum distance from the data x_k using (2).

D_w = min_j ||x_k − v_j^(0)||    (2)

b. Update the pseudo weights of the winner neuron w and the runner-up neuron r using (3) and (4).

v_w^P = v_w^P + (1 − |D_w|/gap_x) α  if r ≠ 0;  v_w^P = v_w^P + α  if r = 0    (3)

v_r^P = v_r^P + (|D_w|/gap_x) α  if r ≠ 0    (4)

where
D_w is the distance between the winning neuron w and the data x_k;
v_w^P is the pseudo weight of neuron w;
v_r^P is the pseudo weight of neuron r.
End for k

* Step 4: Evolve Cerebellar structure
Identify neurons with high trophic factors from their pseudo weights and remove the remaining neurons.
a. Initialise the set of trophic neurons V^(1) = ∅, the high trophic neuron index i = 0 and the lowest detected pseudo weight d = 0.
For j = 1..m:
b. Find a neuron i with the highest pseudo weight using (5).

if v_j^P > (d + β) then i = j, d = v_j^P    (5)

where β is the pseudo potential threshold initialized in Step 1.
c. If the neighbouring neurons to the right of neuron i exhibit a characteristic drop in pseudo weight, then neuron i is identified as a neuron with a high trophic factor. When this occurs, assign the weight v_i of neuron i to the set of neurons with high trophic factors V^(1) using (6).

if i ≠ 0 and v_j^P < (d − β) then v_i ∈ V^(1), c = c + 1, i = 0    (6)

d. Update the lowest pseudo weight detected using (7).

if i = 0 and v_j^P < d then d = v_j^P    (7)

End for j

* Step 5: Incremental Learning
Iteratively refine the surviving neurons' weights using LVQ [27].
For T = 1..T_max:
a. Initialise e^(T+1) = 0.
For k = 1..n:
b. Find the winner w using (8).

||x_k − v_w^(T)|| = min_j ||x_k − v_j^(T)||, for j = 1..c, x_k ∈ X, v_j ∈ V^(1)    (8)

c. Update the weight of the winner w using (9).

v_w^(T+1) = v_w^(T) + α (x_k − v_w^(T))    (9)

d. Compute e^(T+1) using (10).

e^(T+1) = e^(T+1) + ||x_k − v_w^(T)||    (10)

End for k
e. Compare e^(T+1) and e^(T) using (11).

Δe^(T+1) = e^(T+1) − e^(T)    (11)

f. If |Δe^(T+1)| ≤ ε, stop.
End for T

In Step 1, the learning constant α is initialized based on the number of training tuples n. In Step 2, PSEC constructs a cerebellar architecture with m regularly spaced neurons that span the input space. This step models the first-stage development process of our nervous system. Next, in Step 3, PSEC performs structural learning through a one-pass pseudo weight learning to obtain a density distribution of the training data. In Step 4, PSEC evolves the cerebellar structure by identifying surviving neurons with high trophic factors, whose pseudo weights form convex density peaks, while the remaining neurons are removed. This step models the second-stage development process of our nervous system. These surviving neurons' weights then form the initial weights for further weight refinement. The operation of Step 5 is fundamentally the LVQ algorithm [27].
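As a concrete illustration, the five PSEC steps can be sketched in one dimension. This is a minimal sketch under the step descriptions above: Step 4's convex-peak detection is simplified to a thresholded local-maximum search, and the parameter defaults are illustrative, not from the paper.

```python
import numpy as np

def psec(x, m=20, beta=0.05, eps=1e-4, t_max=100):
    """Minimal 1-D sketch of PSEC; returns the surviving cluster centres."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    alpha = 1.0 / n                              # Step 1: learning constant
    gap = (x.max() - x.min()) / m                # eq. (1)
    v = np.linspace(x.min(), x.max(), m)         # Step 2: m regularly spaced neurons
    vp = np.zeros(m)                             # pseudo weights, initially 0

    # Step 3: one-pass pseudo weight learning to discover data density
    for xk in x:
        d = np.abs(xk - v)
        w, r = np.argsort(d)[:2]                 # winner and runner-up
        vp[w] += (1.0 - d[w] / gap) * alpha      # winner update, eq. (3)
        vp[r] += (d[w] / gap) * alpha            # runner-up update, eq. (4)

    # Step 4 (simplified): survivors are density peaks above the threshold
    centres = np.array([v[j] for j in range(m)
                        if vp[j] > beta
                        and (j == 0 or vp[j] >= vp[j - 1])
                        and (j == m - 1 or vp[j] >= vp[j + 1])])

    # Step 5: LVQ refinement of the surviving weights, eqs. (8)-(11)
    prev_e = np.inf
    for _ in range(t_max):
        e = 0.0
        for xk in x:
            w = np.argmin(np.abs(xk - centres))
            centres[w] += alpha * (xk - centres[w])   # eq. (9)
            e += abs(xk - centres[w])                 # eq. (10)
        if abs(prev_e - e) <= eps:                    # eq. (11), terminate
            break
        prev_e = e
    return centres
```

On bimodal data the surviving centres settle near the two density modes, which would then serve as centres of the generated fuzzy membership functions.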

4 Stock Price Forecast

Time series prediction has a diverse range of applications [33], and forecasting stock prices is one such application. Time series prediction is formulated as: given values y(T−n) ... y(T), predict y(T+1) as y'(T+1). Time series prediction using neural networks is referred to as the time-delayed approach [7] to distinguish it from recurrent networks with feedback connections. Incidentally, a number of research publications using time-delayed neural networks to forecast stock prices did not compare against the Random Walk Model (see Appendix A in [1]). This is because the time-delayed approach performs a direct forecast on a stock price series that is usually non-stationary. A time series is stationary, an important property in time series analysis, when the mean value of the series remains constant over time. Since many econometric time series are non-stationary, the time-delayed price forecast approach tends to include linear trends, and deterministic shifts in the out-of-sample forecast period thus tend to exacerbate forecast errors [34] when compared against the Random Walk Model. Recent developments in the theory of economic forecasting [34,35] showed that transforming non-stationary econometric time series to stationarity by differencing and cointegration helps robustify the price forecasts and avoid the deterministic shifts in the out-of-sample forecasts that exacerbate forecast errors. This motivates the investigation of forecasting the price difference instead of the price level. The approach of forecasting the price difference is formulated as: given values y(T−n) ... y(T), compute Δy(T−n+1) ... Δy(T) and predict the change in y(T+1), represented as Δy'(T+1), where Δ is the difference operator. This approach uses the difference operator Δ, in contrast to the time-delayed price forecast. Therefore, we call this the Time-delayed Price Difference Forecast approach.
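For illustration, the difference series and the time-delay training tuples for this formulation can be built as follows; the helper name and the array layout are assumptions of this sketch, not from the paper.

```python
import numpy as np

def difference_forecast_dataset(y, n_lags=10):
    """Build training pairs for the time-delayed price difference forecast:
    each input row holds the n_lags past differences dy(T-n_lags+1)..dy(T),
    and the target is the next difference dy(T+1)."""
    dy = np.diff(y)                               # dy(t) = y(t) - y(t-1)
    X = [dy[i - n_lags:i] for i in range(n_lags, len(dy))]
    t = [dy[i] for i in range(n_lags, len(dy))]
    return np.array(X), np.array(t)
```

For example, the series [1, 2, 4, 7, 11] has differences [1, 2, 3, 4]; with two lags the inputs are [1, 2] and [2, 3] with targets 3 and 4.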

5 Stock Trading System

In order to assess the trading performance of the proposed approach, a realistic yet simple computation of the profits or losses of a generic trading decision model is adapted from [36], as shown in Figure 2.


Figure 2: A simple stock trading decision model

The price value of a security is represented as a time series y(T), where y represents a value at time instant T. The action of the trading system is assumed to be one of short, neutral or long, represented by F(T), where F ∈ {−1, 0, 1} [36]. The trading system return is subsequently modeled by a multiplicative return given in (12), where the transaction cost is assumed to be a fraction δ of the transacted price value [36]. This multiplicative return represents the portfolio end value at time instant T.

R(T) = {1 + F(T−1) r(T)} {1 − δ |F(T) − F(T−1)|}    (12)

where
r(T) = y(T)/y(T−1) − 1;
F(T) is the action from the trading system;
δ is the transaction cost rate.
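Read cumulatively over the trading period, eq. (12) yields the portfolio end value reported later in the experiments; the cumulative reading is an assumption of this sketch.

```python
def portfolio_end_value(y, F, delta=0.002):
    """Accumulate eq. (12) multiplicatively over the whole price series.
    y: prices, F: actions in {-1, 0, 1}, delta: transaction cost rate."""
    R = 1.0
    for t in range(1, len(y)):
        r = y[t] / y[t - 1] - 1.0                  # return r(T) = y(T)/y(T-1) - 1
        # position return, discounted by the cost of changing position
        R *= (1.0 + F[t - 1] * r) * (1.0 - delta * abs(F[t] - F[t - 1]))
    return R
```

For instance, holding a long position (F = 1) through two 10% rises with zero transaction cost compounds to 1.1 × 1.1 = 1.21.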

A number of techniques can be used to generate buy and sell signals using technical analysis. One of the simplest and most popular trading rules for deciding when to buy and sell in a security market is the moving average rule [37,38]. A widely used variant of the moving average rule is the Moving Average Convergence/Divergence (MACD) oscillator originally developed by Gerald Appel [39]. MACD uses the crossovers of a fast signal given in (13) and a slow signal given in (14) to indicate a buy or sell signal. The Exponential Moving Average (EMA) of a price series is given in (15) [39]. The fast signal is the difference between the τ_short EMA and the τ_long EMA of y(T) using the closing price, where τ_long > τ_short. The slow signal is the τ_slow EMA of the fast signal.

Fast(T) = EMA^y_τshort(T) − EMA^y_τlong(T)    (13)

Slow(T) = EMA^Fast_τslow(T)    (14)

EMA_τ(T) = K y(T) + (1 − K) EMA_τ(T−1),  K = 2/(τ + 1)    (15)

where
τ is the number of time instances of the moving average;
y(T) is the price at the current time instance T;
EMA_τ(T) is the τ EMA at time instance T.

Instead of using the crossover of the fast and slow signals to generate the buy/sell signal, a simpler moving average trading rule can be generated using just the slow signal, as given in (16), where the slow signal as given in (14) is a moving average of the fast signal and the fast signal is the difference of two moving averages of the price level: a long-period average and a short-period average [37].

F(T) = sign(Slow(T))    (16)

where
F(T) is the action from the trading module for time instant T;
Slow(T) is given in (14).
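Eqs. (13)-(16) can be sketched directly; the smoothing constant K = 2/(τ + 1) is the standard EMA choice assumed here, and seeding each EMA with the first value is an implementation assumption.

```python
import numpy as np

def ema(z, tau):
    """Exponential moving average, eq. (15), with K = 2/(tau + 1)."""
    k = 2.0 / (tau + 1.0)
    out = np.empty(len(z))
    out[0] = z[0]                                 # seed with the first value
    for t in range(1, len(z)):
        out[t] = k * z[t] + (1.0 - k) * out[t - 1]
    return out

def moving_average_actions(y, tau_short=10, tau_long=15, tau_slow=7):
    """Eqs. (13), (14) and (16): fast = short EMA minus long EMA of the price,
    slow = EMA of the fast signal, trading action = sign(slow)."""
    fast = ema(np.asarray(y, dtype=float), tau_short) \
         - ema(np.asarray(y, dtype=float), tau_long)   # eq. (13)
    slow = ema(fast, tau_slow)                         # eq. (14)
    return np.sign(slow)                               # eq. (16)
```

On a steadily rising price series the short EMA stays above the long EMA, so the rule settles into a long position; on a falling series it settles into a short position.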

Figure 3: A trading decision model based on the technical analysis approach with forecast, adapted from [36]


A block diagram of a generic trading decision model based on the technical analysis approach with forecast is shown in Figure 3 [36]. In such a system, the forecasting module is trained using supervised learning on a training data set and is then used to forecast an out-sample data set. The forecasts are then used as input to a trading module to generate trading signals. This common practice of using only the forecasts as input to the trading module results in a loss of information, in effect producing a forecast bottleneck that may lead to suboptimal performance [36]. This forecast bottleneck, formally defined in [36], is produced because the price history of the input series y(T) is discarded in the generation of trading signals. Comparing the models in Figures 2 and 3, the former allows the generation of trading signals from simple trading rules such as the moving average trading rules, while the latter only allows the generation of trading rules based on the forecasts. This forecast bottleneck therefore impedes the use of the price history of the input series y(T) in the generation of trading signals and hence encumbers the trading performance of the forecast module. This motivates the design of a forecast-bottleneck-free trading decision model that in turn facilitates research using a synergy of various forecasting methods and trading decision methods.

A novel evolving rough set-based neuro-fuzzy forecast-bottleneck-free stock trading decision model is proposed and shown in Figure 4. We call the proposed trading decision model Stock Trading using PSEC and RSPOP; it is based on the technical analysis approach. Comparing the models in Figures 3 and 4, the latter simply includes an additional signal of the input series y(T) to the trading module and uses the time-delayed price difference forecast approach. Although this model appears to be an overly simplistic solution to resolve the forecast bottleneck, it facilitates the synergy of the time-delayed price difference forecast approach with the use of the moving average trading rules. The computation of the trading signal F(T) using both the forecast value y'(T+1) and y(T) is given in (20). The computation of the moving average signals is given in (17)-(19).

Figure 4: Stock trading using PSEC and RSPOP: A forecast-bottleneck-free trading decision model

Fast'(T+1) = EMA^y'_τshort(T+1) − EMA^y'_τlong(T+1)    (17)

Slow'(T+1) = EMA^Fast'_τslow(T+1)    (18)

EMA'_τ(T+1) = K y'(T+1) + (1 − K) EMA_τ(T),  K = 2/(τ + 1)    (19)

F(T) = sign(Slow'(T+1))    (20)

where
τ is the number of time instances of the moving average;
F(T) is the trading action for time instant T;
y'(T+1) is the forecast price value.
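Eqs. (17)-(20) amount to one extra EMA update using the forecast value before taking the sign. A self-contained sketch follows; the EMA-state arguments and their names are illustrative, not from the paper.

```python
import numpy as np

def ema_step(prev, z, tau):
    """One EMA update in the eq. (19) form: K z + (1 - K) prev, K = 2/(tau+1)."""
    k = 2.0 / (tau + 1.0)
    return k * z + (1.0 - k) * prev

def forecast_action(ema_short, ema_long, ema_slow, y_next,
                    tau_short=10, tau_long=15, tau_slow=7):
    """Fold the forecast y'(T+1) into the running EMA states held at time T
    and return F(T) = sign(Slow'(T+1)), per eqs. (17)-(20)."""
    s = ema_step(ema_short, y_next, tau_short)   # short EMA with forecast
    l = ema_step(ema_long, y_next, tau_long)     # long EMA with forecast
    fast = s - l                                 # eq. (17)
    slow = ema_step(ema_slow, fast, tau_slow)    # eq. (18)
    return float(np.sign(slow))                  # eq. (20)
```

A forecast above the current EMA levels lifts the short EMA faster than the long one, producing a long signal; a forecast below them produces a short signal.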

6 Experimental Results

This section presents experimental results on the trading of a stock in a real-world market using the proposed evolving rough set-based neuro-fuzzy stock trading decision model. The forecast model is constructed using POPFNN-CRI(S) [24] with fuzzy membership functions generated using PSEC and fuzzy rules identified using RSPOP [16]. The trading performance is benchmarked against the trading decision model without forecast and the trading decision model with ideal forecast using the known price value of the next time instance. The trading performance is also benchmarked against similar trading decision models with forecast models constructed using POPFNN-CRI(S) with fixed fuzzy membership functions and fuzzy rules identified using RSPOP [16], as well as a forecast model constructed using the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS) [23].

The experimental price series consists of 5917 price values of Neptune Orient Lines obtained from the Yahoo Finance web site on the counter N03.SI, for the period of 2nd January 1980 to 1st March 2005. The in-sample training data set is constructed using 10 time-delay elements consisting of y(T) to y(T−9) from the first 4000 data points. The out-sample test data set is constructed using the more recent 1906 data points. Trading signals are generated using heuristically chosen τ_long = 15, τ_short = 10, τ_slow = 7, and the multiplicative profits are calculated with a transaction cost of δ = 0.2%. The moving average trading rule is modified by the introduction of a 1% moving average band. The introduction of this hysteresis band is to reduce the number of trading signals by eliminating the "whiplash" signals that occur when the short-period and long-period moving averages are close [37]. The out-sample price series, the trading signals and the multiplicative returns are presented in Figures 5-6.
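The hysteresis band described above can be sketched as follows; the paper does not spell out the exact band definition, so the relative-to-price reading here is an assumption of this sketch.

```python
import numpy as np

def banded_actions(slow, price, band=0.01):
    """Moving average rule with a hysteresis band: issue a new long/short
    signal only when the slow signal moves outside +/- band of the price
    level; inside the band, hold the previous action, suppressing the
    "whiplash" trades that occur when the moving averages are close."""
    F = np.zeros(len(slow))
    for t in range(len(slow)):
        if slow[t] > band * price[t]:
            F[t] = 1.0
        elif slow[t] < -band * price[t]:
            F[t] = -1.0
        elif t > 0:
            F[t] = F[t - 1]          # inside the band: keep the last action
    return F
```

A slow signal that dips from 0.02 to 0.005 against a unit price and a 1% band keeps the long position rather than flipping, and only a clear move past the band flips the action.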

The experimental results show that the normal trading model yielded a portfolio end value of R(T) = 5.6477, the trading model with RSPOP forecasting yielded R(T) = 10.2560, the trading model with DENFIS forecasting yielded R(T) = 16.4073, the trading model with PSEC-RSPOP forecasting yielded R(T) = 17.7622, and the trading model with ideal forecasting yielded R(T) = 30.0509. The results show that all trading models with forecast yielded higher returns than the normal trading model without forecast, and also yielded lower returns than the trading model with ideal forecast, which is intuitively consistent since it is impossible to forecast with absolute accuracy on real-world data. Forecasting with the POPFNN-CRI(S) Mamdani model using fixed membership functions and RSPOP identified 10 fuzzy rules, forecasting with the DENFIS


TSK model identified 9 fuzzy rules, and forecasting with the POPFNN-CRI(S) Mamdani model using the proposed PSEC algorithm to generate the fuzzy sets and RSPOP identified 7 fuzzy rules. The fixed fuzzy sets used and the fuzzy sets identified using PSEC are shown in Figure 7. The fuzzy rules identified using PSEC and RSPOP are shown in Table 1. The experimental results showed that the proposed evolving rough set-based neuro-fuzzy approach using PSEC and RSPOP uses fewer fuzzy rules and yielded a higher portfolio end value compared against using RSPOP alone as well as the DENFIS TSK model.


Figure 5: Price and trading actions on NOL using the evolving rough set-based neuro-fuzzy approach


Figure 6: Experimental Trading profit on NOL


Figure 7: (a) Fixed input fuzzy membership functions and (b) output fuzzy membership functions used in the RSPOP forecast; (c) fuzzy membership functions used in the PSEC-RSPOP forecast


Rule 1: if (y(T−3) is inc) and (y(T−2) is inc) and (y(T) is dec) then y(T+1) is inc
Rule 2: if (y(T−3) is dec) and (y(T−2) is inc) and (y(T) is small dec) then y(T+1) is inc
Rule 3: if (y(T−3) is inc) and (y(T−2) is inc) and (y(T) is small dec) then y(T+1) is dec
Rule 4: if (y(T−3) is inc) and (y(T−2) is small dec) and (y(T) is inc) then y(T+1) is inc
Rule 5: if (y(T−3) is inc) and (y(T−2) is inc) and (y(T) is inc) then y(T+1) is inc
Rule 6: if (y(T−3) is small dec) and (y(T−2) is dec) and (y(T) is inc) then y(T+1) is small dec
Rule 7: if (y(T−3) is small dec) and (y(T−2) is inc) and (y(T) is inc) then y(T+1) is inc

Table 1: 7 fuzzy rules identified using RSPOP

7 Conclusions

A novel evolving rough set-based neuro-fuzzy stock trading decision model called Stock Trading using PSEC and RSPOP is proposed in this paper. Experimental results on real-world stock market data are presented using the proposed stock trading decision model. The novel trading decision model circumvents the forecast bottleneck and synergizes the time-delayed price difference forecast approach with simple moving average rules for generating trading signals. The trading performances are benchmarked against the trading decision model without forecasting, the trading decision model using the price value of the next time instance (the trading model with ideal forecasting), the trading decision model with RSPOP forecasting and the trading decision model with DENFIS forecasting. Experimental results showed that the proposed evolving rough set-based neuro-fuzzy stock trading decision model is able to identify fewer fuzzy rules and yield higher profits than existing neuro-fuzzy and rough set-based neuro-fuzzy approaches, thus improving the interpretability as well as the accuracy of fuzzy modeling in the area of financial forecasting.

The potential of the proposed Stock Trading using PSEC and RSPOP model discussed in this paper is notable, as it demonstrates the feasible application of POPFNN-CRI(S) in the area of financial prediction. Although the proposed approach does not assure profits on all real-world market stocks, this paper presented a forecast-bottleneck-free trading decision model that opens new opportunities for the synergy of this approach with various trading decision schemes. The design of intelligent trading decision schemes that synergize with forecasting to yield optimal trading performance on real-world stock market data now appears to be a promising research area.

8 Bibliography

[1] Adya, M. and Collopy, F., How effective are neural networks at forecasting and prediction? A review and evaluation, Journal of Forecasting, vol. 17, no. 5-6, pp. 481-495, 1998.

[2] Trippi, R. R. and Turban, E. Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real World Performance, Chicago: Probus Pub. Co., 1993.

[3] Refenes, A.-P. Neural Networks in the Capital Markets, New York: Wiley, 1995.

[4] Trippi, R. R. and Lee, J. K. Artificial Intelligence in Finance & Investing: State-of-the-Art Technologies for Securities Selection and Portfolio Management, Chicago: Irwin Professional Publishing, 1996.

[5] Tikk, D., Koczy, L. T., and Gedeon, T. D., A survey on universal approximation and its limits in soft computing techniques, International Journal of Approximate Reasoning, vol. 33, no. 2, pp. 185-202, Jun, 2003.

[6] Guillaume, S., Designing fuzzy inference systems from data: An interpretability-oriented review, IEEE Transactions on Fuzzy Systems, vol. 9, no. 3, pp. 426-443, 2001.

[7] Saad, E. W., Prokhorov, D. V., and Wunsch II, D. C., Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks, IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1456-1470, 1998.

[8] Pantazopoulos, K. N., Tsoukalas, L. H., Bourbakis, N. G., Brun, M. J., and Houstis, E. N., Financial prediction and trading strategies using neurofuzzy approaches, IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 28, no. 4, pp. 520-531, 1998.

[9] Nishina, T. and Hagiwara, M., Fuzzy inference neural network, Neurocomputing, vol. 14, no. 3, pp. 223-239, Feb 28, 1997.

[10] Lin, C.-S., Khan, H. A., and Huang Chi-Chung, CIRJE, Faculty of Economics, University of Tokyo, CIRJE-F-165, 2002.

[11] Nakanishi, H., Turksen, I. B., and Sugeno, M., A review and comparison of six reasoning methods, Fuzzy Sets and Systems, vol. 57, no. 3, pp. 257-294, Aug 10, 1993.

[12] Mamdani, E. H. and Assilian, S., An experiment in linguistic synthesis with a fuzzy logic controller, International Journal of Man-Machine Studies, vol. 7, no. 1, pp. 1-13, 1975.

[13] Takagi, T. and Sugeno, M., Fuzzy identification of systems and its applications to modeling and control, IEEE Transactions on Systems, Man and Cybernetics, vol. 15, no. 1, pp. 116-132, 1985.

[14] Casillas, J., Cordon, O., Herrera, F., and Magdalena, L. Interpretability Issues in Fuzzy Modeling, Berlin: Springer-Verlag, 2003.

[15] Sugeno, M. and Kang, G. T., Structure identification of fuzzy model, Fuzzy Sets and Systems, vol. 28, no. 1, pp. 15-33, Oct, 1988.

[16] Ang, K. K. and Quek, C., RSPOP: Rough Set-Based Pseudo Outer-Product Fuzzy Rule Identification Algorithm, Neural Computation, vol. 17, no. 1, pp. 205-243, Jan 1, 2005.

[17] Quek, C. and Zhou, R. W., The POP learning algorithms: reducing work in identifying fuzzy rules, Neural Networks, vol. 14, no. 10, pp. 1431-1445, Dec, 2001.

[18] Zadeh, L. A., The concept of a linguistic variable and its application to approximate reasoning-I, II, III, Information Sciences, vol. 8(3), pp. 199-249; vol. 8(4), pp. 301-357; vol. 9(1), pp. 43-80, 1975.

[19] Medasani, S., Kim, J., and Krishnapuram, R., An overview of membership function generation techniques for pattern recognition, International Journal of Approximate Reasoning, vol. 19, no. 3-4, pp. 391-417, 1998.

[20] Yang Yupu, Xu Xiaoming, and Wengyuan, Z., Real-time stable self-learning FNN controller using genetic algorithm, Fuzzy Sets and Systems, vol. 100, no. 1-3, pp. 173-178, Nov 16, 1998.

[21] Ishibuchi, H. and Yamamoto, T., Fuzzy rule selection by multi-objective genetic local search algorithms and rule evaluation measures in data mining, Fuzzy Sets and Systems, vol. 141, no. 1, pp. 59-88, Jan 1, 2004.

[22] Kasabov, N., Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning, IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 31, no. 6, pp. 902-918, 2001.

[23] Kasabov, N. K. and Song, Q., DENFIS: dynamic evolving neural-fuzzy inference system and its application for time-series prediction, IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 144-154, 2002.

[24] Ang, K. K., Quek, C., and Pasquier, M., POPFNN-CRI(S): pseudo outer product based fuzzy neural network using the compositional rule of inference and singleton fuzzifier, IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 33, no. 6, pp. 838-849, 2003.

[25] Zhou, R. W. and Quek, C., POPFNN: A Pseudo Outer-product Based Fuzzy Neural Network, Neural Networks, vol. 9, no. 9, pp. 1569-1581, Dec, 1996.

[26] Quek, C. and Zhou, R. W., POPFNN-AAR(S): a pseudo outer-product based fuzzy neural network, IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 29, no. 6, pp. 859-870, 1999.

[27] Kohonen, T. Self-Organization and Associative Memory, Berlin, New York: Springer-Verlag, 1989.

[28] Tung, W. L. and Quek, C., "DIC: A Novel Discrete Incremental Clustering Technique for the Derivation of Fuzzy Membership Functions," Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence, pp. 178-187, 2002.

[29] Kandel, E. R., Schwartz, J. H., and Jessell, T. M. Essentials of Neural Science and Behavior, Norwalk, CT: Appleton & Lange, 1995.

[30] Albus, J. S., A new approach to manipulator control: The Cerebellar Model Articulation Controller (CMAC), Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, vol. 97, pp. 270-277, 1975.

[31] Albus, J. S., Data storage in the cerebellar model articulation controller (CMAC), Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, vol. 97, pp. 228-233, 1975.

[32] Ester, M., Kriegel, H.-P., Sander, J., and Xu, X., "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise," Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96), pp. 226-231, 1996.

[33] Box, G. E. P., Jenkins, G. M., and Reinsel, G. C. Time Series Analysis: Forecasting and Control, Englewood Cliffs, NJ: Prentice Hall, 1994.

[34] Clements, M. P. and Hendry, D. F. Forecasting Non-stationary Economic Time Series, Cambridge, MA: MIT Press, 1999.

[35] Clements, M. P. and Hendry, D. F. Forecasting Economic Time Series, Cambridge, New York: Cambridge University Press, 1998.

[36] Moody, J., Wu, L., Liao, Y., and Saffell, M., Performance functions and reinforcement learning for trading systems and portfolios, Journal of Forecasting, vol. 17, no. 5-6, pp. 441-470, 1998.

[37] Brock, W., Lakonishok, J., and LeBaron, B., Simple Technical Trading Rules and the Stochastic Properties of Stock Returns, Journal of Finance, vol. 47, no. 5, pp. 1731-1764, 1992.

[38] Gencay, R., The predictability of security returns with simple technical trading rules, Journal of Empirical Finance, vol. 5, no. 4, pp. 347-359, Oct, 1998.

[39] Plummer, T. and Ridley, A. Forecasting Financial Markets: The Psychology of Successful Investing, London: Kogan Page, 2003.
