From Logistic Regression in SciKit-Learn to Deep Learning with TensorFlow – A fraud detection case study – Part III

After a series of posts about exotic option pricing (Asians, barriers and Bermudans) with TensorFlow and about finding optimal hedging strategies with deep learning (using an LSTM network to learn a delta hedge), I will come back to our credit card fraud detection case. In the previous part we built a logistic regression classifier in TensorFlow to detect fraudulent transactions. We will see that our logistic regression classifier is equivalent to a very simple neural network with exactly one layer consisting of one node with a sigmoid activation function. We will extend this simple network to a deep neural network by adding more hidden layers, using the low level API of TensorFlow to build the networks. At the end of the post we will use Keras's high level API to build the same network with just a few lines of code.

We will continue to use the same data and apply the same transformations that we have been using since the first part of this series.

As usual you can find the notebook on my GitHub repository.

Deep learning / neural networks in a nutshell

An artificial neural network (ANN) is a collection of connected nodes. In the first layer of the network the inputs to the nodes are the input features. In the following layers the outputs of the previous layer's nodes are the inputs to the nodes of the current layer. If we have more than one hidden layer we can call the network a deep neural network.

(Figure: a feed-forward neural network with input, hidden and output layers)

The picture is generated with a LaTeX/TikZ script written by Kjell Magne Fauske (http://www.texample.net/tikz/examples/neural-network/), released under a Creative Commons license. Thanks for that.

The output of a node is obtained by applying an activation function to the dot (scalar) product of a weight vector and the input vector, plus a bias. Let X be the vector of input features and w_i the weight vector of node i; then the output of this node is given by

output_i =  \phi(X^Tw_i+ b_i),

with an activation function \phi and bias b_i.

If a layer consists of more than one node, the layer can be represented as a matrix multiplication. Such a layer is often called a linear or dense layer. Typical choices for the activation function are tanh, ReLU and the sigmoid function.

(Figure: common activation functions such as tanh, ReLU and sigmoid)

As we can see from this formula, a dense layer with one node and the sigmoid function as activation is exactly our logistic regression model: the matrix product is the logit and the output of the activation function is the predicted probability, as in a logistic regression model.
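To make this equivalence concrete, here is a minimal NumPy sketch (with made-up toy inputs, not from the original notebook) of a single dense node with a sigmoid activation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
X_toy = rng.randn(5, 3)      # 5 samples with 3 features
w_toy = rng.randn(3, 1)      # weight vector of the single node
b_toy = 0.1                  # bias

logit = X_toy @ w_toy + b_toy    # linear part X^T w + b for every sample
prob = sigmoid(logit)            # sigmoid activation -> probability, exactly as in logistic regression
print(prob.ravel())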

Let's review the logistic regression example in a neural network setting. We start with a function which constructs the computational graph for a dense (linear) layer, given an input, an activation function and the number of nodes.

def add_layer(X, n_features, n_nodes, activation=None):
    """
    Build a dense layer with an n_features-dimensional input X and n_nodes output nodes.

    Parameters:

        X : 2D input tensor (n_samples, n_features)
        n_features : number of features in the input tensor
        n_nodes : number of nodes in the layer (output dimension)
        activation : None or a callable activation function

    Output:

        Operator which returns a 2D tensor (n_samples, n_nodes)

    """
    weights = tf.Variable(initial_value=tf.random_normal((n_features,n_nodes), 0, 0.1, seed=42), dtype=tf.float32)
    bias = tf.Variable(initial_value=tf.random_normal((1,n_nodes), 0, 0.1, seed=42), dtype=tf.float32)
    layer = tf.add(tf.matmul(X, weights), bias)
    if activation is None:
        return layer
    else:
        return activation(layer)

We wrap our training and prediction functions in a class. The constructor of this class builds the computational graph in TensorFlow. The function create_logit builds the part of the graph that computes the logits (in the logistic regression case: one layer with one node and the identity as activation function). We will override this function at a later point to add more layers to our network.

class model(object):
    def __init__(self, n_features, output_every_n_epochs=1, name='model'):
        self.input = tf.placeholder(tf.float32, shape=(None, n_features))
        self.true_values = tf.placeholder(tf.float32, shape=(None,1))
        self.training = tf.placeholder(tf.bool)
        self.logit = self.create_logit()
        self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=self.true_values, 
                                                                           logits=self.logit))
        self.predicted_probs = tf.sigmoid(self.logit)
        self.output_every_n_epochs = output_every_n_epochs
        self.name = name
        self.saver = tf.train.Saver()    



    def create_logit(self):
        return  add_layer(self.input, 30, 1)

    def evaluate_loss_and_probs(self, sess, X, y, training=False, output=False):
        loss, probs = sess.run([self.loss, self.predicted_probs], {self.input : X, 
                                                                   self.true_values : y.reshape(-1,1),
                                                                   self.training : training})
        probs = probs.reshape(-1)
        y_hat = (probs > 0.5).reshape(-1)*1
        auc = roc_auc_score(y, probs)
        precision = precision_score(y, y_hat)
        recall = recall_score(y, y_hat)
        fp = np.sum((y!=y_hat) & (y==0)) 
        fpr = fp / (y==0).sum()
        if output:
            print('Loss: %.6f \t AUC %.6f \t Precision %.6f%% \t Recall %.6f%% \t FPR %.6f%%' % (loss, auc, precision*100, recall*100, fpr*100))
        return loss, probs, y_hat, auc, precision, recall



    def train(self, sess, X, y, n_epochs, batch_size, learning_rate):
        init = tf.global_variables_initializer()
        sess.run(init)
        optimizer = tf.train.GradientDescentOptimizer(learning_rate)
        train = optimizer.minimize(self.loss)
        n_samples = X.shape[0]
        n_iter = int(np.ceil(n_samples/batch_size))
        indices = np.arange(n_samples)
        training_losses = []
        training_aucs = []
        for epoch in range(0,n_epochs):
            np.random.shuffle(indices)
            for i in range(n_iter):
                idx = indices[i*batch_size:(i+1)*batch_size]
                x_i = X[idx,:]
                y_i = y[idx].reshape(-1,1)
                sess.run(train, {self.input : x_i, 
                                 self.true_values : y_i,
                                 self.training : True})
            output=False
            if (epoch % self.output_every_n_epochs)==0:
                print(epoch, 'th Epoch')
                output=True
            loss_train_epoch, predict_train_epoch, y_hat, auc_train_epoch, _, _ = self.evaluate_loss_and_probs(sess, X, y, False, output)
            training_losses.append(loss_train_epoch)
            training_aucs.append(auc_train_epoch)
        with plt.xkcd() as style:
            plt.figure(figsize=(7,7))
            plt.subplot(2,1,1)  
            plt.title('Loss')
            plt.plot(range(n_epochs), training_losses)
            plt.xlabel('# Epoch')
            plt.subplot(2,1,2)
            plt.title('AUC')
            plt.plot(range(n_epochs), training_aucs)
            plt.xlabel('# Epoch')
            plt.tight_layout()
            plt.savefig('training_loss_auc_%s.png' % self.name, dpi=300)
        self.saver.save(sess, "./%s/model.ckpt" % self.name)

    def restore(self, sess):
        self.saver.restore(sess, "./%s/model.ckpt" % self.name)

Now we use this class to build our logistic regression model:

np.random.seed(42)
lr = model(30, 10, 'lr')
n_epochs = 11
batch_size = 100
with tf.Session() as sess:
    lr.train(sess, X_train, y_train, n_epochs, batch_size, 0.1)
    print('Validation set:')
    _, probs_lr, y_hat_lr, _, _, _ = lr.evaluate_loss_and_probs(sess, X_valid, y_valid, False, True)

(Figure: training loss and AUC per epoch for the logistic regression model)

0 th Epoch
Loss: 0.007944   AUC 0.980217    Precision 86.538462%    Recall 56.675063%   FPR 0.015388%
10 th Epoch
Loss: 0.004231   AUC 0.984984    Precision 87.591241%    Recall 60.453401%   FPR 0.014948%
Validation set:
Loss: 0.003721   AUC 0.977169    Precision 89.041096%    Recall 68.421053%   FPR 0.014068%

Backpropagation

In the previous parts we have seen how we can learn the weights (parameters) of our logistic regression model. So we know how to train a network with one layer, but how can we train a network with more than one layer?

The concept is called backpropagation and is basically an application of the chain rule. In the first phase (the feed-forward phase) the input is fed through all layers of the network and the loss is calculated. Then, in the second or backward phase, the weights are updated recursively from the last layer to the first.

At the last layer the derivative of the loss is straightforward to compute. For the gradients of the weights in the inner or hidden layers we need the previously calculated derivatives.

With the calculated gradients we can again apply a gradient descent method to optimize our weights.

The power of TensorFlow and other deep learning libraries such as PyTorch is again automatic differentiation: we don't need to worry about calculating the gradients ourselves.

A detailed derivation of the backpropagation algorithm with an example for a quadratic loss function can be found on Wikipedia.
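To make the two phases concrete, here is a minimal NumPy sketch (toy shapes and values, not from the post) of one forward and one backward pass through a small two-layer network with a tanh hidden layer and a quadratic loss:

import numpy as np

rng = np.random.RandomState(0)
x_sample = rng.randn(3, 1)                   # one input sample with 3 features
y_target = np.array([[1.0]])                 # its target value

W1, b1 = rng.randn(4, 3), np.zeros((4, 1))   # hidden layer with 4 nodes
W2, b2 = rng.randn(1, 4), np.zeros((1, 1))   # output layer with 1 node

# forward (feed-forward) phase
z1 = W1 @ x_sample + b1
h = np.tanh(z1)                              # hidden activations
y_hat = W2 @ h + b2                          # network output
loss = 0.5 * (y_hat - y_target) ** 2         # quadratic loss

# backward phase: apply the chain rule layer by layer, starting at the loss
dL_dyhat = y_hat - y_target                  # derivative of the quadratic loss
dL_dW2 = dL_dyhat @ h.T                      # gradient for the last layer's weights
dL_dh = W2.T @ dL_dyhat                      # propagate the error back to the hidden layer
dL_dz1 = dL_dh * (1 - np.tanh(z1) ** 2)      # chain rule through the tanh activation
dL_dW1 = dL_dz1 @ x_sample.T                 # gradient for the first layer's weights

# one gradient descent step on the weights (bias gradients omitted for brevity)
learning_rate = 0.1
W2 -= learning_rate * dL_dW2
W1 -= learning_rate * dL_dW1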

First deep network

Now it's time for our first deep neural network. We will use four dense layers with 120, 60, 30 and finally 1 node.

class model2(model):

    def create_logit(self):
        layer1 = add_layer(self.input, 30, 120)
        layer2 = add_layer(layer1, 120, 60)
        layer3 = add_layer(layer2, 60, 30)
        layer4 = add_layer(layer3, 30, 1)
        return layer4

np.random.seed(42)
dnn1 = model2(30, 10, 'model1')
n_epochs = 11
batch_size = 100
with tf.Session() as sess:
    dnn1.train(sess, X_train, y_train, n_epochs, batch_size, 0.1)
    print('Validation set')
    _, probs_dnn1, y_hat_dnn1, _, _, _ = dnn1.evaluate_loss_and_probs(sess, X_valid, y_valid, False, True)

The performance of this network is not really good. Actually, it is quite bad given the complexity of the model.
The AUC on the validation set is worse than the AUC of the logistic regression.

(Figure: ROC curves of the logistic regression and the linear DNN on the validation set)

(Figure: detail of the ROC curves at low false positive rates)

For low FPRs the logistic regression almost always outperforms the deep neural network (DNN). An FPR of 0.1% means that we will have 1 false positive in 1000 transactions. If you have millions of transactions, even such a low FPR can affect many of your customers. Only at very low FPRs (less than 0.0001) does the DNN have a slightly higher true positive rate (TPR).

The problem is that we use the identity as the activation function, so the logit is still a linear function of the input.
If we want to capture non-linear dependencies we have to add a non-linear activation function.
Let's try the ReLU.

Time for non-linearity

class model2b(model):

    def create_logit(self):
        layer1 = add_layer(self.input, 30, 120, tf.nn.relu)
        layer2 = add_layer(layer1, 120, 60, tf.nn.relu)
        layer3 = add_layer(layer2, 60, 30, tf.nn.relu)
        layer4 = add_layer(layer3, 30, 1)
        return layer4

np.random.seed(42)
dnn1b = model2b(30, 10, 'model1b')
n_epochs = 31
batch_size = 100
with tf.Session() as sess:
    dnn1b.train(sess, X_train, y_train, n_epochs, batch_size, 0.1)
    print('Validation set')
    _, probs_dnn1b, y_hat_dnn1b, _, _, _= dnn1b.evaluate_loss_and_probs(sess, X_valid, y_valid, False, True)

Another popular choice is tanh. We compare both activation functions with the logistic regression:

(Figure: ROC curves of the logistic regression and the DNNs with ReLU and tanh activations)

(Figure: detail of the ROC curves at low false positive rates)

We see that both non-linear models outperform the logistic regression. For low FPRs the TPR is significantly higher.
Assume we would accept an FPR of 0.01%; then the recall of our DNN is around 80% vs. 50% for the logistic regression.
We can detect many more fraudulent transactions with the same rate of false alarms.

Using TensorFlow layers

Instead of building the computational graph ourselves (weight and bias tensors, etc.) we can use TensorFlow layers. The function tf.layers.dense builds a linear or dense layer. We can specify the input, the number of nodes and the activation function (similar to our own function).

For the next model we use this TensorFlow function and add one more layer.

class model3(model):

    def create_logit(self):
        layer1 = tf.layers.dense(self.input, 240, activation=tf.nn.tanh)
        layer2 = tf.layers.dense(layer1, 120, activation=tf.nn.tanh)
        layer3 = tf.layers.dense(layer2, 60, activation=tf.nn.tanh)
        layer4 = tf.layers.dense(layer3, 30, activation=tf.nn.tanh)
        layer5 = tf.layers.dense(layer4, 1)
        return layer5

np.random.seed(42)
dnn2 = model3(30, 10, 'model2')
n_epochs = 31
batch_size = 100
with tf.Session() as sess:
    dnn2.train(sess, X_train, y_train, n_epochs, batch_size, 0.1)
    print('Validation set')
    _, probs_dnn2, y_hat_dnn2, _, _, _= dnn2.evaluate_loss_and_probs(sess, X_valid, y_valid, False, True)
0 th Epoch
Loss: 0.003000   AUC 0.986239    Precision 82.428941%    Recall 80.352645%   FPR 0.029897%
10 th Epoch
Loss: 0.002036   AUC 0.992393    Precision 95.626822%    Recall 82.619647%   FPR 0.006595%
20 th Epoch
Loss: 0.001598   AUC 0.995232    Precision 93.989071%    Recall 86.649874%   FPR 0.009673%
30 th Epoch
Loss: 0.001273   AUC 0.996695    Precision 99.137931%    Recall 86.901763%   FPR 0.001319%
Validation set
Loss: 0.002425   AUC 0.980571    Precision 91.764706%    Recall 82.105263%   FPR 0.012309%

The model didn't improve over the previous one. Maybe we are now overfitting. One way to prevent overfitting in DNNs is dropout. Dropout randomly deactivates a proportion of the nodes during training, so we prevent our neural network from memorizing the training data. Let's add dropout layers to the previous model.

Let's use a dropout rate of 20%.

class model4(model):

    def create_logit(self):
        layer1 = tf.layers.dense(self.input, 120, activation=tf.nn.tanh)
        layer1 = tf.layers.dropout(layer1, 0.2, training=self.training)
        layer2 = tf.layers.dense(layer1, 60, activation=tf.nn.tanh)
        layer2 = tf.layers.dropout(layer2, 0.2, training=self.training)
        layer3 = tf.layers.dense(layer2, 30, activation=tf.nn.tanh)
        layer3 = tf.layers.dropout(layer3, 0.2, training=self.training)
        layer4 = tf.layers.dense(layer3, 1)
        return layer4

np.random.seed(42)
dnn3 = model4(30, 10, 'model3')
n_epochs = 31
batch_size = 100
with tf.Session() as sess:
    dnn3.train(sess, X_train, y_train, n_epochs, batch_size, 0.1)
    print('Validation set')
    _, probs_dnn3, y_hat_dnn3, _, _, _= dnn3.evaluate_loss_and_probs(sess, X_valid, y_valid, False, True)

(Figure: ROC curves of the logistic regression and the deep models on the validation set)

(Figure: detail of the ROC curves at low false positive rates)

We see that all our deep learning models outperform the LR model on the validation set. The difference in AUC doesn't seem very big, but especially for very low FPRs the recall is much higher, and the model with dropout (DNN3) performs slightly better than the others.

Let's go for model 3 (four layers with dropout) and look at the scores of this model on the test data.

with tf.Session() as sess:
    dnn3.restore(sess)
    print('Test set')
    _, probs_dnn3_test, y_hat_dnn3_test, _, _, _= dnn3.evaluate_loss_and_probs(sess, X_test, y_test, False, True)
Test set
Loss: 0.001825   AUC 0.991294    Precision 97.619048%    Recall 83.673469%   FPR 0.003517%

This model performs very well on our test set: we have a high recall with a very low FPR at a threshold of 50%.

Keras

The library Keras offers a very convenient API to TensorFlow (and it also supports other deep learning backends).

We can build the same model in just a few lines of code. For many standard problems there are predefined loss functions, but we can also write our own loss functions in Keras.

For the model training and the prediction we only need one line of code each.

model = keras.Sequential()
model.add(keras.layers.Dense(120, input_shape=(30,), activation='tanh'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(60, activation='tanh'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(30, activation='tanh'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy')

model.fit(X_train, y_train, epochs=31, batch_size=100)

probs_keras = model.predict(X_test)
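As a quick check (a sketch, not part of the original post; it assumes y_test from the earlier split and sklearn's roc_auc_score), we can score these predicted probabilities with the same metric we used for the other models:

from sklearn.metrics import roc_auc_score  # likely already imported earlier in the notebook

# probs_keras holds the predicted fraud probabilities of the Keras model from above
print('Keras test AUC: %.4f' % roc_auc_score(y_test, probs_keras.ravel()))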

Conclusion

In this part we saw how to build and train a deep neural network with TensorFlow using the low level and mid level APIs, and as an outlook we saw how easy model development is with a high level API like Keras.

For this fraud detection problem a very simple deep network can outperform a classical machine learning algorithm like logistic regression if we look into the low false positive rate (FPR) regions. If we can accept higher false positive rates, all models perform similarly.

To decide on a final model, one needs to specify the costs of an FP (a genuine transaction which we may block or at least investigate) and an FN (a fraudulent transaction which we miss), so we can balance the trade-off between recall (detection power) and FPR.

Our fraud detection problem is, as we know, an imbalanced class problem. We could maybe improve the quality of the logistic regression with oversampling of the minority class or undersampling of the majority class. Or we could try other 'classical' machine learning methods like random forests or boosted trees, which often outperform a logistic regression.

Another interesting unsupervised deep learning method to detect anomalies in transactions is the auto-encoder.

I think I will cover these topics in later posts. So stay tuned.

As usual you can find the notebook on my GitHub repo. Please download it and play with the model parameters: for example, change the number of epochs we train, apply adaptive learning rates, add more layers, change the number of nodes in each layer or play with the dropout rates to find better models, and please share your ideas and results.

So long…

From Logistic Regression in SciKit-Learn to Deep Learning with TensorFlow – A fraud detection case study – Part II

We will continue to build our credit card fraud detection model. In the previous post we used scikit-learn to detect fraudulent transactions with a logistic regression model. This time we will build a logistic regression in TensorFlow from scratch. We will start with some TensorFlow basics and then see how to minimize a loss function with (stochastic) gradient descent.

We will fit our model to our training set by minimizing the cross entropy. For logistic regression, minimizing the cross entropy is equivalent to maximizing the likelihood (see the previous part for details). In the next part we will extend this model with some hidden layers and build a deep neural network to detect fraud, and we will see how to use the high-level API to build the same model much more easily and quickly.

As usual you will find the corresponding notebook also on GitHub and kaggle.

First we will load the data, split it into a training and test set and normalize the features as in our previous model.


import numpy as np
import pandas as pd
import tensorflow as tf
import keras
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_curve, roc_auc_score, classification_report, accuracy_score
import seaborn as sns
import matplotlib.pyplot as plt

credit_card = pd.read_csv('../input/creditcard.csv')

X = credit_card.drop(columns='Class', axis=1)
y = credit_card.Class.values

np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

Short TensorFlow introduction

TensorFlow uses a dataflow graph to represent the computation in terms of the dependencies between individual operations.

We first define the dataflow graph, then we create a TensorFlow session to run or calculate it.

Let's start with a very simple graph. Say we have a two-dimensional vector

x = (x_1, x_2)

and want to compute ax^2 = (ax_1^2, ax_2^2), where a is a constant (e.g. 5).

First we define the input, a tf.placeholder object. We will use tf.placeholder to feed our data into our models (like our features and the value we try to predict). We can also use tf.placeholder to feed hyperparameters into our model (e.g. a learning_rate). We have to define the type of the input (in this case a floating point number) and can optionally specify the shape of the input (here a two-dimensional vector).

Another class is tf.constant; as the name indicates, it represents a constant in our computational graph.

input_x = tf.placeholder(tf.float32)
a = tf.constant(5., tf.float32, name='a', shape=(2,))

Now we define our operator y.

y = a * tf.pow(input_x,2)

We can create a TensorFlow session and run our operator with the run() method of the session. We have to feed our input vector x as a dictionary into the graph. Let's say x = (1, 2).

x = np.array([1,2])
with tf.Session() as sess:
    result = sess.run(y, {input_x: x})
    print(result)
[ 5. 20.]

Automatic differentiation in TensorFlow

Most machine learning and deep learning problems are in the end minimization problems. We either have to minimize a loss function, or we have to maximize a reward function in the case of reinforcement learning. A bread-and-butter method to solve these kinds of problems is the gradient descent (ascent) method.

The power of TensorFlow is the automatic or algorithmic differentiation of our computational graph. We can get the analytical derivatives (or gradients) almost for 'free'. We don't need to derive the formula for the gradient ourselves or implement it.

The gradient \nabla f(x) is the multidimensional generalization of the derivative of a function. It's the vector of the partial derivatives of the function and points in the direction of the strongest increase. If we have a real-valued function f(x) with x being an n-dimensional vector, then f decreases the fastest when we go from the point x in the direction of the negative gradient.

To get the gradient we use the tf.gradients function. Let's say we want to differentiate y with respect to our input x. The function call looks like this:

g = tf.gradients(y, [input_x])
grad_y_x = 0
with tf.Session() as sess:
    grad_y_x = sess.run(g,{input_x: np.array([1,2])})
    print(grad_y_x)
[array([10., 20.], dtype=float32)]

Chain Rule in TensorFlow

Let's check a simple example of the chain rule. We will come back to it when we talk about backpropagation in the coming part; backpropagation is one of the key concepts in deep learning. Let's recall the chain rule, which we all learned in high school:

\frac{d}{dx}f(g(x)) = f'(g(x))g'(x)

For our example we use our function y and chain it to a new function z=log(y).

The 'inner' partial derivative of y_i with respect to x_i is

\frac{\partial y_i}{\partial x_i} = 10x_i

and the outer one, of z_i with respect to y_i, is

\frac{\partial z_i}{\partial y_i} = \frac{1}{5x_i^2}.

By the chain rule the partial derivative of z_i with respect to x_i is \frac{10x_i}{5x_i^2} = \frac{2}{x_i}.

With TensorFlow, we can calculate the outer and inner derivatives separately or in one step.

In our example we will calculate two gradients, one with respect to y and one with respect to x. Multiplying the gradient with respect to y elementwise with the gradient of y with respect to x (the inner derivative) yields the gradient of z with respect to x.

z = tf.log(y)
with tf.Session() as sess:
    result_z = sess.run(z,  {input_x: np.array([1,2])})
    print('z =', result_z)
    delta_z = tf.gradients(z, [y, input_x])
    grad_z_y, grad_z_x = sess.run(delta_z,  {input_x: np.array([1,2])})
    print('Gradient with respect to y', grad_z_y)
    print('Gradient with respect to x', grad_z_x)
    print('Manual chain rule', grad_z_y * grad_y_x)
z = [1.609438  2.9957323]
Gradient with respect to y [0.2  0.05]
Gradient with respect to x [2. 1.]
Manual chain rule [[2. 1.]]

Gradient descent method

As mentioned before the gradient is very useful if we need to minimize a loss function.

It's like hiking down a hill: we walk step by step in the direction of the steepest descent and finally reach the valley. The gradient provides us with the information in which direction we need to walk.

So if we want to minimize a function f (e.g. the root mean squared error or the negative log-likelihood) we can apply the iterative algorithm

x_n = x_{n-1} - \gamma  \nabla f(x_{n-1}),

with a starting point x_0. These kinds of methods are called gradient descent methods.

Under particular circumstances we can be sure that we reach the global minimum, but in general this is not true. Sometimes it can happen that we end up in a local minimum or on a plateau.
To avoid getting stuck in local minima there are plenty of extensions to plain vanilla gradient descent (e.g. simulated annealing). In the machine learning literature the gradient descent method is often called the batch gradient method, because all data points are used to calculate the gradient.

We usually multiply the gradient by a factor, the so-called learning rate, before we subtract it from the previous value. If the learning rate is too large, we make large steps in that direction, but it can happen that we step over the minimum and miss it. If the learning rate is too small, the algorithm takes longer to converge. There are extensions which adapt the learning rate per parameter (e.g. Adam, RMSProp or AdaGrad) to achieve faster and better convergence (see for example http://ruder.io/optimizing-gradient-descent/index.html).
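To make the update rule concrete, here is a tiny NumPy sketch (a toy function, not from the post) of plain gradient descent on f(x) = (x - 3)^2:

import numpy as np

grad_f = lambda x: 2.0 * (x - 3.0)   # gradient of f(x) = (x - 3)^2

x_n = 0.0             # starting point x_0
learning_rate = 0.1   # the factor gamma
for _ in range(50):
    x_n = x_n - learning_rate * grad_f(x_n)   # x_n = x_{n-1} - gamma * grad f(x_{n-1})
print(x_n)            # approaches the minimum at x = 3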

Example Batch Gradient descent

Let's see how to use it on a linear regression problem. We generate 1000 random observations y = 2x_1 + 3x_2 + \epsilon, with \epsilon normally distributed with zero mean and a standard deviation of 0.2.

#Generate data
np.random.seed(42)
eps = 0.2 * np.random.randn(1000)
x = np.random.randn(2,1000)
y = 2 * x[0,:] + 3 * x[1,:] + eps

We use a simple linear model to predict y. Our model is

\hat{y_i} = w_1 x_{i,1} + w_2 x_{i,2},

for an observation x_i and we want to minimize the mean squared error of our predictions

\frac{1}{1000}  \sum (y_i-\hat{y_i})^2.

Clearly we could use the well-known least squares estimator for the weights, but we want to minimize the error with a gradient descent method in TensorFlow.
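For reference, here is a small sketch (not in the original post) of the closed-form least squares solution on the data generated above; the gradient descent method should end up close to it:

# x has shape (2, 1000), so we transpose it to the usual (observations, features) layout
w_ls, *_ = np.linalg.lstsq(x.T, y, rcond=None)
print(w_ls)   # should be close to the true weights [2, 3]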

We use the tf.Variable class to store the parameters w which we want to learn (estimate) from the data. We specify the shape of the tensor through the initial values. The initial values are the starting point of our minimization.

Since we have a linear model, we can represent our model with a single matrix multiplication of our weight (parameter) matrix w with the observation matrix (rows: features, columns: observations).

# Setup the computational graph with loss function
input_x = tf.placeholder(tf.float32, shape=(2,None))
y_true = tf.placeholder(tf.float32, shape=(None,))
w = tf.Variable(initial_value=np.ones((1,2)), dtype=tf.float32)
y_hat = tf.matmul(w, input_x)
loss = tf.reduce_mean(tf.square(y_hat - y_true))

In the next step we are going to apply our batch gradient descent algorithm.

We define the gradient of the loss with respect to our weights w as grad_loss_w.

We also need to initialize our weights with the initial value (the starting point of our optimization). TensorFlow has an operator for this: tf.global_variables_initializer(). In our session we run the initialization operator first, and then we can apply our algorithm.

We calculate the gradient and apply it to our weights with the function assign().

grad_loss_w = tf.gradients(loss, [w])
init = tf.global_variables_initializer()
losses = np.zeros(10)
with tf.Session() as sess:
    # Initialize the variables
    sess.run(init)
    # Gradient descent
    for i in range(0,10):
        # Calculate gradient
        dloss_dw = sess.run(grad_loss_w, {input_x:x,
                                          y_true:y})
        # Apply gradient to weights with learning rate
        sess.run(w.assign(w - 0.1 * dloss_dw[0]))
        # Output the loss
        losses[i] =  sess.run(loss, {input_x:x,
                                     y_true:y})
        print(i+1, 'th Step, current loss: ', losses[i])
    print('Found minimum', sess.run(w))
plt.plot(range(10), losses)
plt.title('Loss')
plt.xlabel('Iteration')
_ = plt.ylabel('RMSE')

(Figure: loss per iteration of the batch gradient descent example)

Luckily we don't need to program the same algorithm ourselves every time. TensorFlow provides many gradient descent algorithms, e.g.
tf.train.GradientDescentOptimizer, tf.train.AdagradDAOptimizer or tf.train.RMSPropOptimizer (to mention a few). They compute the gradient and apply it to the weights automatically.

In the case of the GradientDescentOptimizer we only need to specify the learning rate and tell the optimizer which loss function we want to minimize.

We call the method minimize which returns our training or optimization operator. In our loop we just need to run the operator.

optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
losses = np.zeros(10)
with tf.Session() as sess:
    # Initialize the variables
    sess.run(init)
    # Gradient descent
    for i in range(0,10):
        _, losses[i] =  sess.run([train, loss], {input_x:x,
                                     y_true:y})
    print('Found minimum', sess.run(w))
plt.plot(range(10), losses)
plt.title('Loss')
plt.xlabel('Iteration')
_ = plt.ylabel('RMSE')

Stochastic Gradient Descent and Mini-Batch Gradient

One extension of batch gradient descent is stochastic gradient descent. Instead of calculating the gradient over all observations we just randomly pick one observation (without replacement) and evaluate the gradient at this point. We repeat this until we have used all data points; one such pass is called an epoch. We repeat that process for several epochs.

Another variant uses more than one random data point per gradient; this is the so-called mini-batch gradient descent. Please feel free to play with the batch_size and the learning rate to see the effect on the optimization. One advantage is that we don't need to keep all data in memory for the optimization, which matters especially when we are talking about big data: we just need to load one small batch at a time to calculate the gradient.

np.random.seed(42)
optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 25
losses = np.zeros(n_epochs)
with tf.Session() as sess:
    # Initialize the variables
    sess.run(init)
    # Gradient descent
    indices = np.arange(x.shape[1])
    for epoch in range(0,n_epochs):
        np.random.shuffle(indices)
        for i in range(int(np.ceil(x.shape[1]/batch_size))):
            idx = indices[i*batch_size:(i+1)*batch_size]
            x_i = x[:,idx]
            x_i = x_i.reshape(2,batch_size)
            y_i = y[idx]
            sess.run(train, {input_x: x_i, 
                             y_true:y_i})
        
        if epoch%1==0: 
            loss_i = sess.run(loss, {input_x: x, 
                             y_true:y})
            print(epoch, 'th Epoch Loss: ', loss_i)
        loss_i = sess.run(loss, {input_x: x, 
                             y_true:y})
        losses[epoch]=loss_i
    print('Found minimum', sess.run(w))
plt.plot(range(n_epochs), losses)
plt.title('Loss')
plt.xlabel('Iteration')
_ = plt.ylabel('RMSE')
Found minimum [[1.9929324 2.9882016]]

(Figure: loss per epoch of the mini-batch gradient descent example)

Our minimisation algorithm found a solution very close to the true values.

Logistic Regression in TensorFlow

Now we have all the tools to build our logistic regression model in TensorFlow.
It's quite similar to our previous toy example. Logistic regression is also a kind of linear model; it belongs to the class of generalized linear models, with the logit as link function. As we have seen in the previous part, in logistic regression we assume that the logits (logarithms of the odds) are linear in the parameters/weights.

Our data set has 30 features, so we adjust the placeholders and the weights accordingly. We have seen that minimizing the cross entropy is equivalent to maximizing the likelihood function. TensorFlow provides us with the loss function sigmoid_cross_entropy_with_logits, so we don't need to implement the loss function ourselves (let us use this little shortcut; the cross entropy or negative log-likelihood is quite easy to implement). The loss function takes the logits and the true labels (response) as inputs. It computes the entropy elementwise, so we have to take the mean or sum of its output.

# Setup the computational graph with loss function
input_x = tf.placeholder(tf.float32, shape=(None, 30))
y_true = tf.placeholder(tf.float32, shape=(None,1))
w = tf.Variable(initial_value=tf.random_normal((30,1), 0, 0.1, seed=42), dtype=tf.float32)
logit = tf.matmul(input_x, w)
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logit))

To get the prediction we have to use the sigmoid function on the logits.

y_prob = tf.sigmoid(logit)

For the training we can almost reuse the code of the gradient descent example. We just need to adjust the number of iterations (100; feel free to play with this parameter) and the function call. In each iteration we run our training operator, calculate the current loss and the current probabilities and store the information to visualize the training.

Every ten epochs we print the current loss and AUC score.

optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_epochs = 100
losses = np.zeros(n_epochs)
aucs = np.zeros(n_epochs)
with tf.Session() as sess:
    # Initialize the variables
    sess.run(init)
    # Gradient descent
    for i in range(0,n_epochs):
        _, iloss, y_hat =  sess.run([train, loss, y_prob], {input_x: X_train,
                                                           y_true: y_train.reshape(y_train.shape[0],1)})
        losses[i] = iloss
        aucs[i] = roc_auc_score(y_train, y_hat)
        if i%10==0:
            print('%i th Epoch Train AUC: %.4f Loss: %.4f' % (i, aucs[i], losses[i]))
    
    # Calculate test auc
    y_test_hat =  sess.run(y_prob, {input_x: X_test,
                                             y_true: y_test.reshape(y_test.shape[0],1)})
    weights = sess.run(w)
0 th Epoch Train AUC: 0.1518 Loss: 0.7446
10 th Epoch Train AUC: 0.8105 Loss: 0.6960
20 th Epoch Train AUC: 0.8659 Loss: 0.6906
30 th Epoch Train AUC: 0.9640 Loss: 0.6893
40 th Epoch Train AUC: 0.9798 Loss: 0.6884
50 th Epoch Train AUC: 0.9816 Loss: 0.6876
60 th Epoch Train AUC: 0.9818 Loss: 0.6868
70 th Epoch Train AUC: 0.9818 Loss: 0.6861
80 th Epoch Train AUC: 0.9819 Loss: 0.6853
90 th Epoch Train AUC: 0.9820 Loss: 0.6845

(Figure: training loss and AUC per epoch of the logistic regression in TensorFlow)

The AUC score of the test data is around 98%.

Since we have only 30 features we can easily visualize the influence of each feature in our model.
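One way to produce such a plot (a sketch, not necessarily how the figure below was made) is a simple bar chart of the weights array extracted in the session above, labelled with the column names of the feature data frame X:

f, ax = plt.subplots(figsize=(9, 6))
plt.bar(range(30), weights.ravel())            # one learned weight per feature
plt.xticks(range(30), X.columns, rotation=90)
plt.ylabel('Weight')
_ = plt.title('Learned weight per feature')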

(Figure: learned weight of each feature in the logistic regression model)

So that's it for today. I hope you enjoyed reading the post. Please download or fork the notebook on GitHub or on Kaggle, play with the code and change the parameters.

In the next post we will add hidden layers to our model and build a neural network, and we will see how to use the high-level API to build the same model much more easily. If you want to learn more about gradient descent and optimization, have a look at the overview linked above (http://ruder.io/optimizing-gradient-descent/index.html).

So long.

One more note: I will separate the data science related notebooks from the quant related notebooks on GitHub. In the future you will find all machine learning and deep learning related notebooks in their own repository, while the quant notebooks will stay in the old repo.

From Logistic Regression in SciKit-Learn to Deep Learning with TensorFlow – A fraud detection case study – Part I

It's quite a long time since my last post. It has been a busy time for me and a lot of things have changed. One obvious change was the overdue change of my blog title from IPython to Jupyter notebooks. My work life has also shifted over time from a quant to a data scientist role. The most recent change is that I relocated to Bangkok this year. I am still working in the finance industry and now lead a small data science team in the operational risk area.

After getting used to the new life here in South East Asia I've decided to continue my blog. But there will be a slight change in the topics: I plan to write more about machine learning and deep learning in Python and R and less about pricing and XVAs. I will also have the chance to look into some pricing model validation in my new role, so there is a chance that some quant related posts will come as well.

In the next three posts, we will see how to build a fraud detection (classification) system with TensorFlow. We will start by building a logistic regression classifier in scikit-learn (sklearn). In the next step we will build a logistic regression classifier in TensorFlow from scratch. In the third post we will add hidden layers to our logistic regression and build a neural network.

You can find the complete source code on GitHub or on kaggle.

For this example we use a publicly available real-world data set. You can find the data on Kaggle. The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. It contains only numerical input variables which are the result of a PCA transformation. Due to confidentiality issues, there is no more background information about the data. Features V1, V2, ..., V28 are the principal components obtained with PCA.

Install requirements

The easiest way is to install the Anaconda Python distribution from Anaconda Cloud. Windows, Mac and Linux are supported. I use the 64-bit Python 3.6 version. Most of the required packages already come with the basic installation. To install Keras and TensorFlow, open the Anaconda prompt (shell) and install the missing packages via conda (a package manager, similar to apt-get in Ubuntu):

conda install -c conda-forge keras tensorflow

Fraud detection with logistic regression in Scikit-Learn

First we load the required libraries and functions.

# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in 

import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_curve, roc_auc_score, classification_report, accuracy_score, confusion_matrix 
import seaborn as sns
import matplotlib.pyplot as plt

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory

import os
print(os.listdir("../input"))

# Any results you write to the current directory are saved as output.

Pandas is used for loading the data and is a powerful library for data wrangling. If you are not familiar with pandas, check out the tutorials on the pandas project website. NumPy is the underlying numerical library for pandas and scikit-learn. Seaborn and matplotlib are used for visualisation.

Load and visualize the data

First we load the data and try to get an overview of it.

We load the CSV file with the command read_csv and store it as a data frame in memory.

credit_card = pd.read_csv('../input/creditcard.csv')

Next we try to get an overview of the fraud vs. non-fraud distribution; we are going to use the seaborn countplot function to produce a bar chart.

f, ax = plt.subplots(figsize=(7, 5))
sns.countplot(x='Class', data=credit_card)
_ = plt.title('# Fraud vs NonFraud')
_ = plt.xlabel('Class (1==Fraud)')

(Figure: bar chart of fraud vs. non-fraud transaction counts)

We cannot even see the bar of the fraud cases. As we can see, we have mostly non-fraudulent transactions. Such a problem is also called an imbalanced class problem.

Around 99.8% of all transactions are non-fraudulent. The simplest classifier would always predict 'no fraud' and would be correct in almost all cases. Such a classifier would have a very high accuracy but would be quite useless.

For such imbalanced classes we could use over- or undersampling methods to try to balance the classes (see imbalanced-learn for example: https://github.com/scikit-learn-contrib/imbalanced-learn), but this is out of the scope of today's post. We will come back to it in a later post.

As accuracy is not very informative in this case, the AUC (area under the ROC curve) is a better metric to assess the model quality. In a two-class classification problem, the AUC score is equal to the probability that our classifier assigns a higher score to a randomly chosen fraudulent transaction than to a randomly chosen genuine one. Random guessing would have a probability of 50%.
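A tiny sketch (with made-up scores, not from the post) illustrates this pairwise interpretation and compares it to sklearn's roc_auc_score:

import numpy as np
from sklearn.metrics import roc_auc_score

y_toy = np.array([0, 0, 0, 0, 1, 1])                 # 1 = fraud, 0 = genuine
scores_toy = np.array([0.1, 0.3, 0.2, 0.6, 0.5, 0.9])

# fraction of (fraud, genuine) pairs where the fraud case gets the higher score (ties ignored)
pairs = [(f, g) for f in scores_toy[y_toy == 1] for g in scores_toy[y_toy == 0]]
pairwise = np.mean([f > g for f, g in pairs])
print(pairwise, roc_auc_score(y_toy, scores_toy))    # both give 0.875 for this toy data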

Now we create the feature matrix X and the response vector y. We drop the column Class from the data frame and store the result in a new data frame X, and we select the column Class as our vector y.

X = credit_card.drop(columns='Class', axis=1)
y = credit_card.Class.values

Due to the construction of the dataset (PCA-transformed features, which minimizes the correlation between factors), we don't have any highly correlated features. Multicollinearity could cause problems in a logistic regression.

To test for multicollinearity one can look at the correlation matrix (which works only for non-categorical features, as we have today), run partial regressions and compare the standard errors, or use the R^2 values of auxiliary regressions to calculate variance inflation factors (VIFs).
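As an aside, here is a sketch (not in the original post, and it assumes statsmodels is installed) of how variance inflation factors could be computed for our feature data frame X:

from statsmodels.stats.outliers_influence import variance_inflation_factor

X_mat = X.values
# one VIF per feature; values far above 5-10 would indicate multicollinearity
vifs = [variance_inflation_factor(X_mat, i) for i in range(X_mat.shape[1])]
print(pd.Series(vifs, index=X.columns).sort_values(ascending=False).head())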


corr = X.corr()

mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True

cmap = sns.diverging_palette(220, 10, as_cmap=True)

# Draw the heatmap with the mask and correct aspect ratio
f, ax = plt.subplots(figsize=(11, 9))
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
            square=True, linewidths=.5, cbar_kws={"shrink": .5})

(Figure: correlation heatmap of the features)

Short reminder of Logistic Regression

In logistic regression the logits (logs of the odds) are assumed to be a linear function of the features

L=\log(\frac{P(Y=1)}{1-P(Y=1)}) = \beta_0 + \sum_{i=1}^n \beta_i X_i.

Solving this equation for p = P(Y=1) yields

p = \frac{\exp(L)}{1+\exp(L)}.

The parameters \beta_i can be estimated by maximum likelihood estimation (MLE). The likelihood for m given observations Y_1, ..., Y_m, where p_j denotes the predicted probability P(Y_j = 1), is

lkl = \prod_{j=1}^m p_j^{Y_j}(1-p_j)^{1-Y_j}.

Finding the maximum of the likelihood is equivalent to minimizing the negative logarithm of the likelihood (the log-likelihood),

-llkh = -\sum_{j=1}^m \left( Y_j \log(p_j) + (1-Y_j) \log(1-p_j) \right),

which is numerically more stable. The negative log-likelihood has the same form as the cross-entropy error function for the discrete case.

So finding the maximum likelihood estimator is the same problem as minimizing the average cross entropy error function.
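A small numerical sketch (toy values, not from the post) shows that the average negative log-likelihood above is exactly the binary cross entropy as computed by sklearn's log_loss:

import numpy as np
from sklearn.metrics import log_loss

y_toy = np.array([1, 0, 1, 1, 0])             # observed classes Y_j
p_toy = np.array([0.9, 0.2, 0.7, 0.6, 0.1])   # predicted probabilities P(Y_j = 1)

neg_llkh = -np.mean(y_toy * np.log(p_toy) + (1 - y_toy) * np.log(1 - p_toy))
print(neg_llkh, log_loss(y_toy, p_toy))       # both give the same value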

SciKit-Learn uses by default a coordinate descent algorithm to find the minimum of the L2-regularized version of the loss function (see http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression).

The main difference between L1 (Lasso) and L2 (Ridge) regularization is that L1 prefers a sparse solution (the higher the regularization parameter, the more parameters will be zero), while L2 encourages small parameter values overall.

Train the model

Training and test set

First we split our data set into a train and a validation set by using the function train_test_split.

np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y)

Model definition

scaler = StandardScaler()
lr = LogisticRegression()
model1 = Pipeline([('standardize', scaler),
                    ('log_reg', lr)])
model1.fit(X_train, y_train)

As preparation we standardize our features to have zero mean and unit standard deviation; this improves the convergence of gradient descent algorithms. We use the class StandardScaler. It has the method fit_transform(), which learns the mean \mu_i and standard deviation \sigma_i of each feature i and returns the standardized version \frac{x_i - \mu_i}{\sigma_i}. We learn the mean and standard deviation on the training data and can apply the same standardization to the test set with the method transform().

The logistic regression is implemented in the class LogisticRegression; for now we will use the default parameterization. The model can be fit using the method fit(). After fitting, the model can be used to make predictions with predict() or to return the estimated class probabilities with predict_proba().

We combine both steps into a Pipeline, which performs both steps automatically. When we call the method fit() of the pipeline, it will invoke the method fit_transform() for all but the last step and the method fit() of the last step, which is equivalent to lr.fit(scaler.fit_transform(X_train), y_train).

If we invoke the method predict() of the pipeline, it's equivalent to lr.predict(scaler.transform(X_train)).
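A quick sketch to verify this equivalence: the pipeline fits the very same scaler and lr objects we passed in, so the manual two-step prediction should match the pipeline's predictions (assuming model1 has been fitted as above).

manual_pred = lr.predict(scaler.transform(X_train))
print((manual_pred == model1.predict(X_train)).all())   # expected to print True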

Training score and Test score

confusion_matrix() returns the confusion matrix C, where C_{0,0} is the number of true negatives (TN), C_{0,1} the number of false positives (FP), and the second row contains the corresponding counts for the positives (FN and TP). We use the function accuracy_score() to calculate the accuracy of our model on the train and test data. We see that the accuracy is quite high (99.9%), which is expected in such an unbalanced class problem. With the function roc_auc_score() we get the area under the receiver operating characteristic curve (AUC) for our simple model.

y_train_hat = model1.predict(X_train)
y_train_hat_probs = model1.predict_proba(X_train)[:,1]
train_accuracy = accuracy_score(y_train, y_train_hat)*100
train_auc_roc = roc_auc_score(y_train, y_train_hat_probs)*100
print('Confusion matrix:\n', confusion_matrix(y_train, y_train_hat))
print('Training accuracy: %.4f %%' % train_accuracy)
print('Training AUC: %.4f %%' % train_auc_roc)
Confusion matrix:
 [[213200     26]
 [   137    242]]
Training accuracy: 99.9237 %
Training AUC: 98.0664 %
y_test_hat = model1.predict(X_test)
y_test_hat_probs = model1.predict_proba(X_test)[:,1]
test_accuracy = accuracy_score(y_test, y_test_hat)*100
test_auc_roc = roc_auc_score(y_test, y_test_hat_probs)*100
print('Confusion matrix:\n', confusion_matrix(y_test, y_test_hat))
print('Test accuracy: %.4f %%' % test_accuracy)
print('Test AUC: %.4f %%' % test_auc_roc)
Confusion matrix:
 [[71077    12]
 [   45    68]]
Test accuracy: 99.9199 %
Test AUC: 97.4810 %

Our model is able to detect 68 fraudulent transactions out of 113 (a recall of 60%) and produces 12 false positives (<0.02%) on the test data.

To visualize the receiver operating characteristic (ROC) curve we use the function roc_curve. It returns the true positive rate (recall) and the false positive rate (probability of a false alarm) for a range of different thresholds. This curve shows the trade-off between recall (detecting fraud) and the false alarm probability.

If we classified all transactions as fraud, we would have a recall of 100% but also the highest possible false alarm rate (100%). The naive way to minimize the false alarm probability is to classify all transactions as genuine.


fpr, tpr, thresholds = roc_curve(y_test, y_test_hat_probs, drop_intermediate=True)

f, ax = plt.subplots(figsize=(9, 6))
_ = plt.plot(fpr, tpr, [0,1], [0, 1])
_ = plt.title('AUC ROC')
_ = plt.xlabel('False positive rate')
_ = plt.ylabel('True positive rate')
plt.style.use('seaborn')

plt.savefig('auc_roc.png', dpi=600)

(Figure: ROC curve of the logistic regression on the test set)

Our model classifies all transactions with a fraud probability >= 50% as fraud. If we choose a higher threshold, we reach a lower false positive rate but we also miss more fraudulent transactions. If we choose a lower threshold we catch more fraud but need to investigate more false positives.

Depending on the costs of each type of error, it may make sense to select a different threshold.

If we set the threshold to 90%, the recall decreases from 60% to 45%, while the false positive rate stays the same. We can see that our model assigns some non-fraudulent transactions a very high probability of being fraud.

y_hat_90 = (y_test_hat_probs > 0.90 )*1
print('Confusion matrix:\n', confusion_matrix(y_test, y_hat_90))
print(classification_report(y_test, y_hat_90, digits=6))

If we set the threshold down to 10%, we can detect around 75% of all fraud cases but almost double our false positive rate (now 25 false alarms):

Confusion matrix:
 [[71064    25]
 [   25    88]]
             precision    recall  f1-score   support

          0     0.9996    0.9996    0.9996     71089
          1     0.7788    0.7788    0.7788       113

avg / total     0.9993    0.9993    0.9993     71202

Where to go from here?

We just scratched the surface of sklearn and logistic regression. For example, we could spend much more time on:

  • feature selection / engineering (which is a bit hard without any background information about the features),
  • we could try techniques to counter the class imbalance and
  • we could use cross-validation to fine tune the hyperparameters or
  • try a different regularization (Lasso/Elastic Net) or
  • try a different optimizer (stochastic gradient descent or mini-batch sgd)
  • adjust class weights to shift the decision boundary (make missed frauds more expensive in the loss function)
  • and finally we could try different classifier models in sklearn like decision trees, random forests, k-nearest neighbours, naive Bayes or support vector machines.

But for now we will stop here. In the next part we will implement the logistic regression model with stochastic gradient descent in TensorFlow and then extend it to a neural net; we will come back to these points at a later time. In the meantime feel free to play with the notebook, change the parameters and see how the model changes.

So long…