# Deep Learning Nonlinear Regression

In this article we put a perceptron to work on a difficult nonlinear regression problem. The data was generated with an exponential function of this shape:

The graph above corresponds to the values of the dataset, which can be downloaded from the Statistical Reference Datasets of the Information Technology Laboratory of the United States at this link: http://www.itl.nist.gov/div898/strd/nls/data/eckerle4.shtml

Neural networks are especially well suited to learning patterns and remembering shapes. Perceptrons are among the most basic, yet very powerful, types of neural network. Their structure is essentially an array of weighted values that is recalculated and balanced iteratively. They can include activation layers or functions that transform the output into a certain range or set of values.
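As a minimal illustration of that structure (my own sketch, not part of the article's model), a single perceptron layer is just a weighted sum passed through an activation function; the weights `W`, bias `b`, and the sigmoid activation below are arbitrary choices:

```python
import numpy

def sigmoid(z):
    # Squashes any real value into the (0, 1) range.
    return 1.0 / (1.0 + numpy.exp(-z))

# Toy layer: 3 inputs feeding 2 neurons (weights chosen arbitrarily).
W = numpy.array([[0.2, -0.5],
                 [0.4,  0.1],
                 [-0.3, 0.8]])
b = numpy.array([0.1, -0.2])

x = numpy.array([1.0, 2.0, 3.0])    # one input sample
output = sigmoid(x.dot(W) + b)      # weighted sum, then activation
print(output.shape)                 # one value per neuron -> (2,)
```

Training consists of iteratively adjusting `W` and `b` so that the layer's output approaches the desired response.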

In order to create the neural network we are going to use Keras, one of the most popular Python libraries. The code is as follows:

The first thing to do is import the elements we will use. For clarity, we will not use aliases:

```
# Numeric Python Library.
import numpy
# Python Data Analysis Library.
import pandas
# Scikit-learn Machine Learning Python Library modules.
#   Preprocessing utilities.
from sklearn import preprocessing
#   Train/test split utilities.
from sklearn import model_selection
# Python graphical library.
from matplotlib import pyplot

# Keras perceptron neuron layer implementation.
from keras.layers import Dense
# Keras Dropout layer implementation.
from keras.layers import Dropout
# Keras Activation Function layer implementation.
from keras.layers import Activation
# Keras Model object.
from keras.models import Sequential
```

In the previous code we imported the numpy and pandas libraries to manage the data structures and perform matrix operations. The two scikit-learn modules will be used to scale the data and to prepare the train and test data sets.

The matplotlib package will be used to render the graphs.

From Keras we load the Sequential model, the structure upon which the artificial neural network will be built. Three types of layers will be used:

1. Dense: The basic layers, made of weighted neurons, that form the perceptron. An entire perceptron could be built with this type of layer alone.
2. Activation: Activation functions transform the output data from other layers.
3. Dropout: A special type of layer used to avoid over-fitting by leaving a number of neurons out of the learning process.
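To make the Dropout idea concrete, here is a small NumPy sketch (my own illustration of the technique, not Keras internals): during training, a random fraction of the activations is zeroed, and the survivors are rescaled so the expected magnitude is preserved (so-called inverted dropout):

```python
import numpy

def dropout(activations, rate, rng):
    # Keep each activation with probability (1 - rate); zero the rest.
    keep_mask = rng.random(activations.shape) >= rate
    # Rescale survivors so the expected activation magnitude is unchanged.
    return activations * keep_mask / (1.0 - rate)

rng = numpy.random.default_rng(3)
acts = numpy.ones(10)          # ten neurons, all firing at 1.0
dropped = dropout(acts, 0.2, rng)
print(dropped)                 # surviving neurons scaled to 1.25, the rest 0
```

At prediction time no neurons are dropped; the rescaling during training is what keeps the two regimes consistent.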

```
# Preparing the dataset.
# Import the CSV into a pandas DataFrame object.
# (Filename assumed; point it at your local copy of the NIST data.)
Eckerle4_df = pandas.read_csv("Eckerle4.csv")

# Convert the DataFrame into a numpy array.
Eckerle4_dataset = Eckerle4_df.values.astype("float32")
# Slice all rows, second column...
X = Eckerle4_dataset[:, 1]
# Slice all rows, first column...
y = Eckerle4_dataset[:, 0]

# Data scaling from 0 to 1; X and y originally have very different scales.
X_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
y_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
X_scaled = X_scaler.fit_transform(X.reshape(-1, 1))
y_scaled = y_scaler.fit_transform(y.reshape(-1, 1))

# Preparing test and train data: 60% training, 40% testing.
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X_scaled, y_scaled, test_size=0.40, random_state=3)
```

The predictor variable is saved in variable X and the dependent variable in y. The two variables have values that differ by several orders of magnitude, and neural networks work better with values near zero. For those two reasons the variables are scaled, removing their original magnitudes and putting them on the same scale: their values are proportionally transformed into the range 0 to 1.
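The effect of `MinMaxScaler` can be checked on a toy column (the values below are made up for illustration):

```python
import numpy
from sklearn import preprocessing

values = numpy.array([400.0, 450.0, 500.0]).reshape(-1, 1)
scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
print(scaled.ravel())           # [0.  0.5 1. ]

# inverse_transform recovers the original magnitudes, which is what
# the plotting code does later with the predictions.
restored = scaler.inverse_transform(scaled)
print(restored.ravel())         # [400. 450. 500.]
```

The minimum maps to 0, the maximum to 1, and everything in between is interpolated linearly.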

The data is divided into two sets: one, with 60% of the samples, will be used to train the neural network; the other, with the remaining 40%, will be used to test whether the model works well on out-of-sample data.
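The split proportions are easy to verify on a dummy dataset (sizes here are illustrative, using the current `sklearn.model_selection` module):

```python
import numpy
from sklearn.model_selection import train_test_split

X = numpy.arange(100).reshape(-1, 1)   # 100 dummy samples
y = numpy.arange(100)

# Same parameters as in the article: 40% held out, fixed random_state.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.40, random_state=3)
print(len(X_train), len(X_test))       # 60 40
```

Fixing `random_state` makes the split reproducible, so repeated runs train and test on exactly the same samples.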

Now we are going to define the neural network. It will consist of an input layer to receive the data, several intermediate layers to process the weights, and a final output layer to return the prediction (regression) results.

The objective is for the network to learn from the training data and finally reproduce the original function with only 60% of the data. It could be less, it could be more; I chose 60% arbitrarily. To verify that the network has learnt the function, we will ask it to predict the response for the test data, which was not used to build the model.

Now let’s think about the neural network topology. If we study the chart, there are three areas that differ considerably: the left tail, up to the 440 mark; a peak between roughly the 440 and 465 marks; and the right tail, from the 465 mark on. For this reason we will use three intermediate neuron layers, so that the first one learns one of these areas, the second another area, and the third the final residuals corresponding to the remaining area. We will therefore have three hidden layers in our network, plus an input and an output layer. The basic structure of the neural network should be similar to this sequence of layers, from left to right:

`INPUT LAYER(1) > [HIDDEN(i)] > [HIDDEN(j)] > [HIDDEN(k)] > OUTPUT(1)`

An input layer that accepts the single predictor value X, a first hidden layer with i neurons, a second hidden layer with j neurons, a third hidden layer with k neurons and, finally, an output layer that returns the regression result y for each sample X.

```
# New sequential network structure.
model = Sequential()

# Input layer with dimension 1 and hidden layer i with 128 neurons.
model.add(Dense(128, input_dim=1, activation="relu"))
# Dropout of 20% of the neurons and activation layer.
model.add(Dropout(0.2))
model.add(Activation("linear"))
# Hidden layer j with 64 neurons plus activation layer.
model.add(Dense(64, activation="relu"))
model.add(Activation("linear"))
# Hidden layer k with 64 neurons.
model.add(Dense(64, activation="relu"))
# Output layer.
model.add(Dense(1))

# Model is compiled using mean squared error as the loss function,
# accuracy as the metric and a gradient descent optimizer.
model.compile(loss="mse", optimizer="sgd", metrics=["accuracy"])

# Training model with train data. Fixed random seed:
numpy.random.seed(3)
model.fit(X_train, y_train, epochs=256, batch_size=2, verbose=2)
```

Now the model is trained by iterating 256 times (epochs) over all the training data, taking two samples at a time (the batch size).
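A quick back-of-the-envelope check of the training workload (assuming the 35 observations listed in the NIST Eckerle4 file): with a batch size of 2, each epoch performs one weight update per batch, rounded up.

```python
import math

n_samples = 35                    # observations in the NIST Eckerle4 file
n_train = round(n_samples * 0.60) # 60% split -> 21 training samples
batch_size = 2
epochs = 256

# One weight update per batch; the last, smaller batch still counts.
updates_per_epoch = math.ceil(n_train / batch_size)
total_updates = updates_per_epoch * epochs
print(updates_per_epoch, total_updates)   # 11 2816
```

A small batch size means many noisy updates per epoch, which works well on a dataset this small but would be slow on larger ones.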

To see the accuracy of the model graphically, we now apply the regression model to new data that was not used to create the model, and plot the predicted values against the actual values.

```
# Predict the response variable with new data.
predicted = model.predict(X_test)

# Plot the predicted data in blue and the actual data in green
# to verify the accuracy of the model visually.
pyplot.plot(y_scaler.inverse_transform(predicted), color="blue")
pyplot.plot(y_scaler.inverse_transform(y_test), color="green")
pyplot.show()
```

And the resulting graph shows that the network has reproduced the shape of the original function:

This demonstrates the power of neural networks for solving complex statistical problems, especially those where causality is not crucial, such as image processing or speech recognition.

## 8 comments on “Deep Learning Nonlinear Regression”

1. Gunjan says:

What changes have to be taken into consideration for high-dimensional data (columns/dimensions > 2000)?

1. gonzalo says:

Deep neural networks are highly resource-intensive systems, so a GPU processing configuration is a must.

For 2,000 columns, I would suggest you first reduce the number of features or group them into components, for example with Principal Component Analysis. Study what the prediction loss is after removing the 1,000 least significant features, for instance. What happens after grouping the features into 10 principal components? Does kernel PCA improve the model? So, perform extensive feature engineering first.
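A hedged sketch of that dimensionality-reduction step, with made-up data and an arbitrary choice of 10 components:

```python
import numpy
from sklearn.decomposition import PCA

rng = numpy.random.default_rng(0)
X = rng.normal(size=(500, 2000))    # dummy high-dimensional data

# Project the 2,000 columns onto 10 principal components.
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)              # (500, 10)

# explained_variance_ratio_ shows how much variance each component keeps,
# which helps decide whether 10 components are enough.
print(pca.explained_variance_ratio_.sum())
```

The reduced matrix can then be fed to the network in place of the raw features; comparing the prediction loss before and after is the feature-engineering experiment suggested above.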

There are other frameworks that come with pre-trained or pre-configured models, such as Caffe at http://caffe.berkeleyvision.org/. You may also take advantage of those.

2. Umberto says:

Thanks for sharing!!
I’ve learnt that Dense(activation) is equivalent to Activation; does that mean you use linear(relu(x)) as your activation function, or does it just replace relu with linear?

3. Bruce Milne says:

This was great. I’ve been looking for an example of using Keras for nonlinear regression and how to shape the model architecture. I have a question though: what are the ‘model.add(Activation(“linear”))’ lines for? You already have layers that use “activation=’relu'”, so why include additional Activation layers?

4. J says:

Can you explain input dimension = 1 in your input layer? I was under the impression that this should be the number of features in your training set.

1. J says:

I am also confused why you set the activation in the add(Dense… and then in an additional model.add.