Monday, October 27, 2008

Stock Prediction using Decision Tree: Classification Tree

This is the fourth post in a series on using Decision Tree for Stock Prediction. For more information, feel free to read post 1, post 2 and post 3 of the series.

Once the data have been preprocessed, we obtain a matrix in which each row is a different day (since we work with daily data) and each column is one of the possible variables (close, volume, technical indicators, combinations of some indicators, etc.). The reason I started with decision trees instead of more "trendy" neural networks or support vector machines is that I prefer to begin with simple methods and then, if necessary, move to a more complex one.
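
To make this concrete, here is a minimal sketch of such a preprocessing step in Python with pandas (this is not the code of the project; the file name, indicator choices and window lengths are just examples):

import pandas as pd

# Daily data for one stock, one row per trading day
# (assumed CSV with at least Date, Close and Volume columns).
prices = pd.read_csv("msft_daily.csv", index_col="Date", parse_dates=True)

# Each column of the resulting matrix is one candidate variable.
features = pd.DataFrame(index=prices.index)
features["close"] = prices["Close"]
features["volume"] = prices["Volume"]
features["sma_10"] = prices["Close"].rolling(10).mean()            # 10-day moving average
features["return_5"] = prices["Close"].pct_change(5)               # 5-day return
features["close_to_sma"] = features["close"] / features["sma_10"]  # combination of indicators

# Target for the classification tree: does the close rise the next day?
target = (prices["Close"].shift(-1) > prices["Close"]).astype(int)

features = features.dropna()
target = target.loc[features.index]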

One big advantage of decision trees is that one can understand the model simply by looking at it (i.e. at the tree). It is very helpful to see why, on a given day, MSFT (the ticker for Microsoft) has been predicted to increase or decrease. However, this readability is only useful as a pre-study in the project. Indeed, since the project is based on making one prediction per day (over the whole backtesting period) for each selected stock, there are too many different models for a human being to understand them all.

Thus, the high number of models is due to the following nested process, which has to be carried out (a rough code sketch follows the outline):

For each year to backtest
  For each open day in the year
    For each stock that has been selected
      For each hyper-parameter combination of the tree
        For each fold of the cross-validation
          Build a decision tree and evaluate it
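
Here is that sketch in Python (scikit-learn is used purely for illustration; the data functions are placeholders with random data, and as written this would of course run for a very long time):

from itertools import product
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Placeholders standing in for the real data layer.
years_to_backtest = range(2001, 2009)
selected_stocks = ["MSFT", "IBM"]            # the real backtest uses about 100 tickers

def open_days(year):
    return range(252)                        # roughly 252 trading days per year

def training_window(stock, year, day):
    # Real system: feature matrix and up/down labels available up to this day.
    return rng.normal(size=(300, 8)), rng.integers(0, 2, size=300)

depths = range(1, 11)                        # example grid: 10 depth values
min_splits = range(2, 12)                    # example grid: 10 minimum-split sizes

for year in years_to_backtest:
    for day in open_days(year):
        for stock in selected_stocks:
            X, y = training_window(stock, year, day)
            for depth, min_split in product(depths, min_splits):
                tree = DecisionTreeClassifier(max_depth=depth,
                                              min_samples_split=min_split)
                # 10-fold cross-validation: one tree is built and evaluated per fold
                scores = cross_val_score(tree, X, y, cv=10)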


If we consider that building a decision tree takes 1 second, then, for a backtest on 100 stocks from 2001 to 2008, we need:

8 * 252 * 100 * (10*10) * 10 = 201'600'000 seconds

This means more than 6 years of computation on a single CPU (still roughly a year and a half on a 4 CPU computer). At this stage, there are mainly two possibilities:
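
Spelling the product out line by line (the 10 x 10 is the hyper-parameter grid, presumably two parameters with ten values each, and the last 10 the cross-validation folds):

years = 8                 # 2001 to 2008
trading_days = 252        # open days per year
stocks = 100
grid_size = 10 * 10       # two hyper-parameters, about 10 values each
cv_folds = 10
seconds_per_tree = 1

total_seconds = years * trading_days * stocks * grid_size * cv_folds * seconds_per_tree
print(total_seconds)                      # 201,600,000
print(total_seconds / (365 * 24 * 3600))  # about 6.4 years of single-CPU time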

  • Grid computing
  • Computing the trees each month instead of each day
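
Here is a minimal single-machine sketch of these two ideas (joblib stands in for the grid of computers, and the model building function is only a stub):

from joblib import Parallel, delayed

def build_monthly_model(stock, year, month):
    # Stub: in the real system this runs the grid search and the
    # cross-validation described above on the data available up to this month.
    return (stock, year, month)

selected_stocks = ["MSFT", "IBM", "AAPL"]    # the real backtest uses about 100 tickers

# One model per stock per month instead of per day,
# dispatched on all available CPUs.
models = Parallel(n_jobs=-1)(
    delayed(build_monthly_model)(stock, year, month)
    for year in range(2001, 2009)
    for month in range(1, 13)
    for stock in selected_stocks)
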
By applying these two ideas, it is possible to bring the processing time down to around 3 hours of calculation (with a grid of 6 computers of 4 CPUs each). The next post of the series will discuss the risk management of the system.

5 comments:

Anonymous said...

Hi Sandro,
Could you elaborate on the hyper-parameter step please?
Shane

Sandro Saitta said...

Shane,
Thanks for asking the question. For setting the hyper-parameters of the decision tree (i.e. the model selection step), I use a simple grid search.

The two parameters I tune are the depth of the tree and the minimum number of elements a node must contain for a split to occur. Each has a range of possible values, and I try every possible combination of the two.
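
In scikit-learn terms, for example, such a grid search could look like this (the parameter ranges are only illustrative, and the data here is a random placeholder):

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Placeholder data; in practice this is one stock's feature matrix and up/down labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 8)), rng.integers(0, 2, size=300)

param_grid = {
    "max_depth": range(1, 11),          # depth of the tree
    "min_samples_split": range(2, 12),  # minimum number of elements to allow a split
}

search = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_)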

I hope it is clearer now. Feel free to ask for more details.

Anonymous said...

Thanks Sandro, that makes perfect sense! Cheers, Shane

shailesh bohra said...

Hi Sandro,

I am also doing research on data mining applications in the stock market, so I am looking for a relevant dataset. Which dataset are you using? Could you please help me to get a relevant dataset for any stock market?

thanks & cheers,
shailesh

Sandro Saitta said...

Hello shailesh,

I'm using data from our own internal database. We get data from Bloomberg.

If you want free data, I think you can use Google Finance and Yahoo! Finance.
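
For example, one minimal way to pull daily data from Yahoo! Finance in Python is the yfinance package (ticker and date range are arbitrary here):

import yfinance as yf

# Daily open/high/low/close/volume for Microsoft over the backtest period.
data = yf.download("MSFT", start="2001-01-01", end="2008-12-31")
print(data.head())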

Hope it helps!

Also, feel free to keep in touch with me, since our work seems to be closely related.

 