Predicting Stock Exchange Prices with Machine Learning

This article describes how to achieve an average prediction accuracy of around 75% for the direction of the next day's average price change. The target magnitude is the 2-day simple moving average: without some smoothing of the daily prices, the forecasts become much harder to obtain. Two days is the minimum possible smoothing window, and that will be the target, so that actual prices are altered as little as possible.

I have randomly selected a company from the New York Stock Exchange: "CNH Industrial NV". There is no particular reason for that; it was a completely random choice among a couple of thousand files I generated from either Yahoo! or Google Finance, I do not remember the source. The files are uploaded here:

The method is valid for any financial data as long as it has the same structure. I have also tested it with Forex data, getting similar accuracy levels with currency pairs such as EURUSD, GBPUSD or USDJPY. The interesting point of forecasting these quotes is that, by examining where the model fails, I think you will improve your price action trading skills and your understanding of the market and of what matters.

Data Collection and Variable Configuration

There are millions of possible candidate variables that may seem worth analyzing. And which target value will we try to predict? I like to think of price as an object subject to physical laws: it reacts to market forces, it has inertia, velocity, acceleration, and so on.

The force may be volume, the price may have a potential energy depending on whether it is very high or very low, the rate of change may be important, and so on. There are many other factors we could analyze, such as gaps, breakouts, technical patterns, candlestick analysis or the distribution of price in space, just to mention a few. For this example we will focus only on price action and volume.

I have the files saved in csv format to be used with Excel, so let’s start loading the csv file into a DataFrame object using Python.

# Importing all the libraries that we will use.
import pandas as pd
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.metrics import accuracy_score
#Load the data from a csv file.
CNHI = {"stock_name":"CNH Industrial NV", "data": pd.read_csv("./data/CNHI_excel.csv",sep="\t",header=0,decimal=',')}
CNHI["data"]=CNHI["data"].drop("Adj Close",1).set_index("Date")

After loading the data, the previous code removes a column that won't be used ("Adj Close") and creates an index using the "Date" column. The date is not a variable we can use for forecasting, so there is no need to keep it as a regular column of the dataset.

The data now has the typical structure of financial data: Date, Open, High, Low, Close and Volume. The first three rows are shown in the next table:

Date        Open   High   Low    Close  Volume
2013-09-30  12.75  13.08  12.5   12.5   352800
2013-10-01  12.76  13.16  12.75  12.92  1477900
2013-10-02  13.02  13.08  12.87  12.9   1631900


We are going to omit High, Low and Close, using only Open and Volume for the study. Let's start preparing the data for the analysis. The predictors (X variables) to be used to predict the target magnitude (y variable) will be the following ones:

  • Two day simple moving average (SMA2). The formula is (Ct + Ct-1)/2, where Ct is the current day's open price and Ct-1 the previous day's open price. This formula is applied to each row of the data set.
Predictors = pd.DataFrame({"sma2":CNHI["data"].Open.rolling(window=2).mean()})
  • 1 day window SMA2. The previous day’s SMA2 value.
Predictors["sma2_1"] = Predictors.sma2.shift(1)

And the other predictors will be:

  • Current day SMA2 increment. (SMA2t – SMA2t-1).
  • 1 day window SMA2 increment. (SMA2t-1 – SMA2t-2).
  • Current day volume increment. (Volt – Volt-1).
  • Current day volume rate of change. (Volt – Volt-1)/Volt
  • 1 day window open price. (Ct-1)
  • Current day open price increment. Ct – Ct-1
  • Current day open price. Ct.
Predictors["sma2_increment"] = Predictors.sma2.diff()  
Predictors["sma2_1_increment"] = Predictors.sma2_1.diff()  
Predictors["vol_increment"] = CNHI["data"].Volume.diff()
Predictors["vol_rel_increment"] = CNHI["data"].Volume.diff() / CNHI["data"].Volume
Predictors["open_1"] = CNHI["data"].Open.shift(1)
Predictors["open_incr"] = CNHI["data"].Open - CNHI["data"].Open.shift(1)
Predictors["open"] = CNHI["data"].Open
# The rows with nulls generated by rolling values will be removed.
Predictors = Predictors.dropna()

A sample of the first 5 rows:

Date        sma2    sma2_1  sma2_increment  sma2_1_increment  vol_increment  vol_rel_increment  open_1  open_incr  open
2013-10-03  12.895  12.89   0.005           0.135             -495500        -0.436026047       13.02   -0.25      12.77
2013-10-04  12.765  12.895  -0.13           0.005             -21800         -0.019558586       12.77   -0.01      12.76
2013-10-07  12.59   12.765  -0.175          -0.13             -400           -0.000359002       12.76   -0.34      12.42
2013-10-08  12.42   12.59   -0.17           -0.175            104600         0.08582212         12.42   0          12.42
2013-10-09  12.5    12.42   0.08            -0.17             -232400        -0.235604217       12.42   0.16       12.58


Target Variable

This will be a classification variable: whether the average price will go up or down the next day. The target will be the difference between tomorrow's SMA2 and today's SMA2 (tomorrow's being unknown).

target = pd.DataFrame({"value":Predictors.sma2.shift(-1) - Predictors.sma2}).dropna()

After calculating the values to predict, the first three rows look like this:

Date        value
2013-10-03  -0.13
2013-10-04  -0.175
2013-10-07  -0.17

Finally, we will match predictors and target values by date and remove those rows without a counterpart in the other table.

X = pd.merge(Predictors, target,left_index=True,right_index=True)[Predictors.columns]
y = pd.merge(Predictors, target,left_index=True,right_index=True)[target.columns]

X now contains the predictors and y the target values. The table contains 1,059 records at this moment.

Extreme Gradient Boosting prediction

Extreme gradient boosting is an exceptional machine learning technique for many reasons. It is based on decision trees and it has nice features such as residual analysis, non-linear regression, feature selection tools, overfitting avoidance and many more. Other machine learning techniques commonly used for this type of analysis are Support Vector Machines, Neural Networks and Random Forests. I have used all of them for predicting market prices, and Extreme Gradient Boosting is always my first choice.

We will set up the regression model using the first 65% of the data, and with that model the remaining 35% of the data will be used to predict future values. This simulates the actual scenario in which we have past data to train our model and want to predict future values with the data we currently have on hand. The data will be split into two sets: the training set, used to fit the model, and the testing set, which won't be used to build the model but only to check whether it works as expected on new data.

train_samples = int(X.shape[0] * 0.65)
X_train = X.iloc[:train_samples]
X_test = X.iloc[train_samples:]
y_train = y.iloc[:train_samples]
y_test = y.iloc[train_samples:]

After applying the data split, the resulting sets contain:

  • Train records: 688.
  • Test records: 371.

The target variable will be transformed for binary classification: a positive change in the average price will be classified as 1 and a non-positive change as 0.

def getBinary(val):
    if val > 0:
        return 1
    return 0
# The transformation is applied to the test data for later use.
# The train data will be transformed when the model is fit.
y_test_binary = pd.DataFrame(y_test["value"].apply(getBinary))

And next, the model is trained and the test data predicted to verify the accuracy of the system:

regressor = xgb.XGBRegressor(gamma=0.0, n_estimators=150, base_score=0.7, colsample_bytree=1, learning_rate=0.01)
xgbModel =, y_train.value.apply(getBinary))
y_predicted = xgbModel.predict(X_test)
y_predicted_binary = [1 if yp >=0.5 else 0 for yp in y_predicted]
print (accuracy_score(y_test_binary,y_predicted_binary))
Out: 0.76010781671159033

So, the initial accuracy without optimizing the model is 76% when predicting the direction of the daily average price change for each of the next 371 trading days.

The model can be optimized further; I have just set a few parameters to avoid overfitting the training data and to adjust the learning rate.

The features used should also be analyzed to avoid redundant variables and to discard those with no predictive relationship to the target, for instance by inspecting their correlations, as sketched below. New features could be added to try improved approaches and, to sum up, there is a lot of work that could be done around this basic model.
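
For instance, a quick way to spot redundant predictors is to inspect their pairwise correlations. The following is just a minimal sketch, assuming the Predictors DataFrame built earlier and an arbitrary 0.95 threshold:

# Pairwise correlation between the predictors; values close to 1 or -1
# point to redundant variables that could be dropped.
correlations = Predictors.corr()
print(correlations.round(2))
# Flag highly correlated pairs (the 0.95 threshold is arbitrary).
for i, col_a in enumerate(correlations.columns):
    for col_b in correlations.columns[i + 1:]:
        if abs(correlations.loc[col_a, col_b]) > 0.95:
            print("{0} ~ {1}: {2}".format(col_a, col_b, round(correlations.loc[col_a, col_b], 2)))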

XGBoost also provides tools to study the features. Let's take a look at their importance:

fig = plt.figure(figsize=(8,8))
plt.xticks(rotation='vertical')[i for i in range(len(xgbModel.feature_importances_))], xgbModel.feature_importances_.tolist(), tick_label=X_test.columns, color="chocolate")

It is obvious that the scope of this field is huge and especially interesting.

Deep Learning Nonlinear Regression

In this article we put a perceptron to work on a high-difficulty nonlinear regression problem. The data has been generated using an exponential function with this shape:


The graph above corresponds to the values of the dataset, which can be downloaded from the Statistical Reference Datasets of the Information Technology Laboratory of the United States at this link:

Neural networks are especially appropriate for learning patterns and remembering shapes. Perceptrons are very basic yet very powerful types of neural networks. Their structure is essentially a set of weighted values that is recalculated and balanced iteratively. They can include activation layers or functions to constrain the output to a certain range or list of values.
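
To make that idea concrete, here is a minimal numpy sketch (not Keras code) of what a single weighted layer with a ReLU activation computes; the input, weights and biases are made up for illustration:

import numpy
# One input sample with two features.
x = numpy.array([0.5, -1.2])
# Illustrative weights (2 inputs -> 3 neurons) and biases.
W = numpy.array([[0.1, -0.4, 0.7],
                 [0.3, 0.2, -0.5]])
b = numpy.array([0.05, -0.1, 0.0])
# The layer output is activation(x . W + b); ReLU is used as the activation.
output = numpy.maximum(, W) + b, 0)
print(output)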

In order to create the neural network we are going to use Keras, one of the most popular Python libraries. The code is as follows:

The first thing to do is to import the elements that we will use. We will not use aliases for the purpose of clarity:

# Numeric Python Library.
import numpy
# Python Data Analysis Library.
import pandas
# Scikit-learn Machine Learning Python Library modules.
#   Preprocessing utilities.
from sklearn import preprocessing
#   Cross-validation utilities.
from sklearn import cross_validation
# Python graphical library
from matplotlib import pyplot
# Keras perceptron neuron layer implementation.
from keras.layers import Dense
# Keras Dropout layer implementation.
from keras.layers import Dropout
# Keras Activation Function layer implementation.
from keras.layers import Activation
# Keras Model object.
from keras.models import Sequential

In the previous code we have imported the numpy and pandas libraries to manage the data structures and perform operations with matrices. The two scikit-learn modules will be used to scale the data and to prepare the test and train data sets.

The matplotlib package will be used to render the graphs.

From Keras, the Sequential model is loaded; it is the structure upon which the artificial neural network model will be built. Three types of layers will be used:

  1. Dense: These are the basic layers, made of weighted neurons, that form the perceptron. An entire perceptron could be built with this type of layer alone.
  2. Activation: Activation functions transform the output data from other layers.
  3. Dropout: This is a special type of layer, used to avoid over-fitting by leaving a number of neurons out of the learning process.

First we load the dataset already formatted as csv.

# Preparing the dataset.
# Imports csv into pandas DataFrame object.
Eckerle4_df = pandas.read_csv("Eckerle4.csv", header=0)
# Converts dataframes into numpy objects.
Eckerle4_dataset = Eckerle4_df.values.astype("float32")
# Slicing all rows, second column...
X = Eckerle4_dataset[:,1]
# Slicing all rows, first column...
y = Eckerle4_dataset[:,0]
# Data Scaling from 0 to 1, X and y originally have very different scales.
X_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
y_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
X_scaled = X_scaler.fit_transform(X.reshape(-1,1))
y_scaled = y_scaler.fit_transform(y.reshape(-1,1))
# Preparing test and train data: 60% training, 40% testing.
X_train, X_test, y_train, y_test = cross_validation.train_test_split( \
    X_scaled, y_scaled, test_size=0.40, random_state=3)

The predictor variable is saved in X and the dependent variable in y. The two variables have values that differ by several orders of magnitude, and neural networks work better with values close to zero. For those two reasons the variables are scaled to remove their original magnitudes and bring them into the same range; their values are proportionally transformed to lie between 0 and 1.

The data is divided into two sets. One will be used to train the neural network, using 60% of all the samples; the other will contain the remaining 40% of the data, which will be used to test whether the model works well with out-of-sample data.

Now we are going to define the neural network. It will consist of an input layer to receive the data, several intermediate layers to process the weights, and a final output layer to return the prediction (regression) results.

The objective is for the network to learn from the training data so that it can finally reproduce the original function using only 60% of the data. It could be less, it could be more; I have chosen 60% arbitrarily. In order to verify that the network has learnt the function, we will ask it to predict the response for the test data that was not used to create the model.

Now let's think about the neural network topology. If we study the chart, there are three areas that differ considerably: the left tail, up to roughly the 440 mark; a peak between approximately the 440 and 465 marks; and the right tail, from the 465 mark on. For this reason we will use three intermediate layers of neurons, so that the first one learns one of these areas, the second one another area, and the third one the remaining residuals, which should correspond to the third area. We will therefore have three hidden layers in our network, plus one input layer and one output layer. The basic layer structure of the neural network should be similar to this sequence of layers, from left to right:

INPUT LAYER(1) > [HIDDEN(i)] > [HIDDEN(j)] > [HIDDEN(k)] > OUTPUT(1)

An input layer that accepts the predictor X, a first hidden layer with i neurons, a second hidden layer with j neurons, a third hidden layer with k neurons and, finally, an output layer that returns the regression result for each sample.

# New sequential network structure.
model = Sequential()
# Input layer with dimension 1 and hidden layer i with 128 neurons. 
model.add(Dense(128, input_dim=1, activation='relu'))
# Dropout of 20% of the neurons and activation layer.
model.add(Dropout(0.2))
model.add(Activation("linear"))
# Hidden layer j with 64 neurons plus activation layer.
model.add(Dense(64, activation='relu'))
model.add(Activation("linear"))
# Hidden layer k with 64 neurons.
model.add(Dense(64, activation='relu'))
# Output layer with a single neuron.
model.add(Dense(1))
# The model is compiled using mean squared error as the loss
# function, accuracy as metric and the Adam gradient descent optimizer.
model.compile(loss='mse', optimizer='adam', metrics=["accuracy"])
# Training the model with the train data. Fixed random seed:
numpy.random.seed(3), y_train, nb_epoch=256, batch_size=2, verbose=2)

Now the model is trained by iterating 256 times (epochs) through all the training data, taking two samples at a time (the batch size).

In order to visualize the accuracy of the model, we now apply the regression model to new data that has not been used to create it, and plot the predicted values against the actual values.

# Predict the response variable with the new data.
predicted = model.predict(X_test)
# Plot the predicted data in blue and the actual data in green
# to visually verify the accuracy of the model.
pyplot.plot(y_scaler.inverse_transform(predicted), color="blue")
pyplot.plot(y_scaler.inverse_transform(y_test), color="green")

And the produced graph shows that the network has adopted the same shape as the function:


This demonstrates the exceptional power of neural networks to solve complex statistical problems, especially those in which causality is not crucial, such as image processing or speech recognition.
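
Besides the visual comparison, the fit can also be quantified numerically; here is a minimal sketch that computes the mean squared error of the test predictions, assuming the predicted and y_test arrays from above:

# Mean squared error between actual and predicted test values,
# measured on the original (unscaled) magnitudes.
actual = y_scaler.inverse_transform(y_test)
estimated = y_scaler.inverse_transform(predicted)
mse = numpy.mean((actual - estimated) ** 2)
print(mse)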

Python for Digital Signal Processing

Before starting with this post, you may want to learn the basics of Python. If you are an experienced programmer and are reading Python for the first time, you will likely find it very easy to understand. One important thing about Python: it requires consistent indentation (typically 4 spaces) for the code to be valid, so if you get an error and your code seems perfect, check that every line is correctly indented. Python also has a particular way of dealing with arrays, closer to the R programming language than to a C-like style.

Python's core functionality is extended with thousands of freely available libraries, many of which are incredibly handy. Even though Python is not a compiled language, many of its libraries are written in C, with Python acting as a wrapper around them.

The libraries used on this article are:

  • scipy – Scientific library for Python.
  • numpy – Numeric library for Python.

To load a wav file in Python:

# Loads the package for later usage of the io.wavfile module.
from scipy import io
# Location of the wav file in the file system.
fileName = '/Downloads/Music_Genres/genres/country/country.00053.wav'
# Loads the sample rate (samples per second) and the signal data (wav).
# Notice that Python functions can return multiple values at the same time.
sample_rate, data =
# Print to stdout the sample rate, number of items
# and duration in seconds of the wav file.
print "Sample rate:{0}, data size:{1}, duration:{2} seconds" \
    .format(sample_rate, data.shape, len(data)/sample_rate)

The output generated should look like this:

Sample rate:22050, data size:(661794L,), duration:30 seconds

The output shows that the wav file contains 661,794 samples in total (the data variable is an array with 661,794 elements). The sample rate is 22,050 samples per second. Dividing 661,794 samples by 22,050 samples per second, we obtain 30 seconds, the length of the wav file.

The Fourier Transform

The Fourier transform is the method we will use to extract the prevalent frequencies from the wav file. Each frequency corresponds to a musical tone; knowing the frequencies present in a particular time interval, we can tell which are the most prominent tones within that interval, making it possible to infer the key and chords played during that time span.

This article is not going to go into the details of the Fourier transform, only into how to use it to extract information about the frequency power of the analyzed wav signal. The video below is an intuitive introduction to the Fourier transform, in case the reader is interested in it; it also includes examples of how to implement it algorithmically. It is advisable to watch it once now and then come back to review it after working through the Fourier transform material below.

Basically, we are given a signal, in this post a wav file, which is composed of a number \(n\) of samples \(x[n]\). We can obtain the frequency power within the signal with the FFT (Fast Fourier Transform) function, an optimized algorithm for computing the Fourier transform.

The FFT function receives two arguments, the signal \(x\) and the number of items to retrieve \(k, k \leq n\). The commonly chosen value is \(k = \frac{n}{2}\), because for a real-valued signal the FFT result \(fft[k]\) is symmetric around that point. This means that only half of the FFT output is needed to retrieve the occurrence of the different frequencies. So, in plain words, if the original signal has 100 samples, only the first 50 FFT values are needed to describe the complete spectrum.
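
As a quick illustration of that symmetry, here is a toy sketch using numpy's FFT (separate from the wav analysis): the FFT magnitudes of a real-valued signal mirror each other around the midpoint.

import numpy as np
# Toy real-valued signal with n = 8 samples.
x = np.array([0., 1., 2., 3., 4., 3., 2., 1.])
F = np.fft.fft(x)
# For a real signal |F[k]| equals |F[n-k]|, so the first n/2
# magnitudes already describe the whole spectrum.
print(np.round(np.abs(F), 3))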

In Python language there are two useful functions to calculate and get the Fourier transform from a sample array, like the one where the data variable from the wav file is stored:

  • fftfreq – Returns the frequency associated with each position of the Fourier transform output for a signal \(x[n]\); that is, the frequency to which each fft element corresponds.
  • fft – Returns the Fourier transform of the sample data. The positions of the returned elements match the positions of the fftfreq output, so that, using both arrays, each fft power element corresponds by position to an fftfreq frequency.

For instance, if the Fourier transform function returns \(fft = \{0, 0.5, 1\}\) and \(fftfreq = \{100, 200, 300\}\), it means that the signal has a power of 0 at 100 Hz, a power of 0.5 at 200 Hz and a power of 1 at 300 Hz, 300 Hz being the most prevalent frequency.
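
Translated into code, that pairing could look like this minimal sketch, using the made-up values from the example:

import numpy as np
# Made-up power and frequency arrays from the example above.
fft_power = np.array([0.0, 0.5, 1.0])
fft_freqs = np.array([100.0, 200.0, 300.0])
# The most prevalent frequency is the one with the highest power.
print(fft_freqs[np.argmax(fft_power)])  # 300.0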

The following code extracts the first 10 seconds from a wav file, applies the Fourier transform and obtains the frequency associated with each item of the spectral data.

# Packages that implement the fast Fourier
# transform functions and wav file loading.
from scipy import fftpack
from scipy import io
import numpy as np
# Loads the wav file as an array.
fileName = './country.00053.wav'
sample_rate, data =
# Extracting 10 seconds. N is the numbers of samples to
# extract or elements from the array.
seconds_to_extract = 10
N = seconds_to_extract * sample_rate
# Knowing N and the sample rate, fftfreq gets the frequency
# associated with each FFT unit.
f = fftpack.fftfreq(N, 1.0/sample_rate)
# Extracts fourier transform data from the sample
# returning the spectrum analysis
subdata = data[:N]
F = fftpack.fft(subdata)

F contains the power and f the frequency to which each item within F is related. The higher the power, the more prevalent the frequency is across the signal. Filtering the frequencies using the f array and extracting the power, we could get a graph like the next one:

On the y-axis, \(|F|\) is the absolute value of each element of F, and the values of f are the frequencies (Hz) on the x-axis. The green and orange lines can be ignored. To get the subset of frequencies [200-900] displayed on the chart, the following code was used:

# Interval limits
Lower_freq = 200
Upper_freq = 900
# Boolean mask: keep frequencies f between the lower limit AND
# the upper limit.
filter_subset = (f >= Lower_freq) * (f <= Upper_freq)
# Extracts filtered items from the frequency list.
f_subset = f[filter_subset]
# Extracts filtered items from the Fourier transform power list.
F_subset = F[filter_subset]
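
To reproduce a chart like the one above from the filtered arrays, a minimal matplotlib sketch could be:

# Plots the filtered power spectrum: FFT magnitude against frequency (Hz).
from matplotlib import pyplot
pyplot.plot(f_subset, abs(F_subset))
pyplot.xlabel("Frequency (Hz)")
pyplot.ylabel("|F|")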