To learn classification with Keras and how to containerize the result, we will divide this task into 7 simple parts:

  1. Introduction to Keras
  2. Learning to program with Keras
  3. Multiclass classification with Keras
  4. Layers and Optimization
  5. Saving the model and weights
  6. Creating a Dockerfile for the application
  7. Pushing to Docker Hub

Introduction

Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation.
Keras is the high-level API of TensorFlow 2.0: an approachable, highly productive interface for solving machine learning problems, with a focus on modern deep learning. It provides essential abstractions and building blocks for developing and shipping machine learning solutions with high iteration velocity.

Installing Keras

To install using pip:
pip install keras

To install using conda:
conda install -c conda-forge keras
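
To verify the installation, a quick sanity check (assuming a TensorFlow 2.x install) is to print the Keras version bundled with TensorFlow:

import tensorflow as tf

# should print the bundled Keras version without errors
print(tf.keras.__version__)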

Learning to program with Keras

The simplest type of model is the Sequential model, a linear stack of layers. It can be created like this:

from tensorflow.keras.models import Sequential

model = Sequential()

To add layers, one can simply call .add() on the model:

from tensorflow.keras.layers import Dense

model.add(Dense(units=64, activation='relu'))

To compile the model, simply use .compile():

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

And finally we can train our model on batches of data using .fit():

model.fit(x_train, y_train, epochs=5, batch_size=32)

To evaluate loss and accuracy, .evaluate() can be used:

loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)

So we can see that building a model, adding layers, and evaluating it all become very easy with Keras.
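
Putting these pieces together, here is a minimal end-to-end sketch; the data is synthetic (random numbers), purely for illustration:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# synthetic data: 20 features, 10 one-hot encoded classes
x_train = np.random.random((1000, 20))
y_train = to_categorical(np.random.randint(10, size=(1000,)), num_classes=10)
x_test = np.random.random((200, 20))
y_test = to_categorical(np.random.randint(10, size=(200,)), num_classes=10)

model = Sequential()
model.add(Dense(units=64, activation='relu', input_shape=(20,)))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=32)
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)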

Multiclass classification with Keras

To start with, I chose the basic Fashion-MNIST dataset. Fashion-MNIST is a dataset of Zalando’s article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28×28 grayscale image, associated with a label from 10 classes.

So we start with loading modules:

#importing modules

from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf  
import tensorflow.keras as ks



#Loading dataset

mnist_fashion = ks.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist_fashion.load_data()

#Data exploration

print('Training Dataset Shape: {}'.format(training_images.shape))
print('No. of Training Dataset Labels: {}'.format(len(training_labels)))
print('Test Dataset Shape: {}'.format(test_images.shape))
print('No. of Test Dataset Labels: {}'.format(len(test_labels)))

Scaling pixel values to the range 0-1 and then reshaping the matrices into 28x28x1 arrays.

training_images = training_images / 255.0
test_images = test_images / 255.0

# reshaping

training_images = training_images.reshape((60000,28,28,1))
test_images = test_images.reshape((10000,28,28,1))

# exploring data again
print('Training Dataset Shape: {}'.format(training_images.shape))
print('No. of Training Dataset Labels: {}'.format(len(training_labels)))
print('Test Dataset Shape: {}'.format(test_images.shape))
print('No. of Test Dataset Labels: {}'.format(len(test_labels)))

#Building layers of model

cnn_model = ks.models.Sequential()
cnn_model.add(ks.layers.Conv2D(50, (3, 3), activation='relu', input_shape=(28, 28, 1), name='Conv2D_layer'))  # first layer: convolutional with ReLU activation

# adding second layer as pooling layer "maxpooling"
cnn_model.add(ks.layers.MaxPooling2D((2, 2), name='Maxpooling_2D'))

# flattening followed by fully connected layers
cnn_model.add(ks.layers.Flatten(name='Flatten'))
cnn_model.add(ks.layers.Dense(50, activation='relu',name='Hidden_layer'))
cnn_model.add(ks.layers.Dense(10, activation='softmax',name='Output_layer'))
cnn_model.summary()

cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# fitting the model

cnn_model.fit(training_images, training_labels, epochs=10)

After 10 epochs I got a training accuracy of 0.97 and a test accuracy of 0.91.
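
The test accuracy comes from evaluating the trained model on the held-out test set; a minimal sketch of that step:

test_loss, test_accuracy = cnn_model.evaluate(test_images, test_labels)
print('Test Accuracy: {}'.format(test_accuracy))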

So we can see that Keras makes classification a very easy and quick task.

Layers and Optimization

As my task was to classify a dataset with 10 classes, I had to add layers to increase the accuracy. I added the first layer with `relu` activation, further layers again with `relu` activation, and finally an output layer with `softmax` activation. Some commonly used activations are described below, with a short demonstration after the list.

  1. `selu` (Scaled Exponential Linear Unit): multiplies scale (> 1) with the output of the tf.keras.activations.elu function to ensure a slope larger than one for positive inputs.
  2. `relu`: applies the rectified linear unit activation function. With default values, this returns the standard ReLU activation: max(x, 0), the element-wise maximum of 0 and the input tensor.
  3. `softmax`: converts a real vector to a vector of categorical probabilities. The elements of the output vector are in the range (0, 1) and sum to 1.
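
These behaviours are easy to check directly on a small tensor; a quick sketch:

import tensorflow as tf

x = tf.constant([[-2.0, 0.0, 3.0]])
print(tf.keras.activations.relu(x).numpy())     # negatives clipped to 0
print(tf.keras.activations.selu(x).numpy())     # scaled ELU output
print(tf.keras.activations.softmax(x).numpy())  # entries in (0, 1), row sums to 1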

After adding these layers, if one wants to improve accuracy further, data augmentation can be used to enlarge the training dataset.
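
As an illustrative sketch, Keras's ImageDataGenerator can produce randomly shifted, rotated, and zoomed variants of the training images on the fly; the transformation ranges below are assumptions, not tuned values:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# small random transformations; the ranges are illustrative assumptions
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)

# train on augmented batches instead of the raw arrays
cnn_model.fit(datagen.flow(training_images, training_labels, batch_size=32),
              epochs=10)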

Saving model and weights

Saving our model and weights is very important: if the loss starts increasing during training, having the weights from a previous epoch saved lets us fall back to them safely.
To save our model (and load it back later), we will use model_from_json, imported with:
from tensorflow.keras.models import model_from_json

And then a simple code like this will save our model and weights:

cnn_model_json = cnn_model.to_json()
with open("cnn_model.json", "w") as json_file:
    json_file.write(cnn_model_json)
# serialize weights to HDF5
cnn_model.save_weights("cnn_model.h5")
print("Saved model to disk")

Our model architecture will be saved in cnn_model.json and the weights in cnn_model.h5.
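
To load the model back later, the JSON architecture and the HDF5 weights are combined; a minimal sketch:

from tensorflow.keras.models import model_from_json

with open("cnn_model.json", "r") as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights("cnn_model.h5")

# the loaded model must be compiled again before evaluation or further training
loaded_model.compile(optimizer='adam',
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])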

Creating a Dockerfile for the application

To build a Docker image, we need to create a Dockerfile first. While writing the Dockerfile, we must remember that all dependencies and data used by the inference model should be added here.
This can be done by some simple commands like:

RUN can be used to install a dependency like Keras, so in the Dockerfile we will write:

RUN pip install keras

This command will install Keras; similarly, all other needed libraries should be installed.

Similarly, COPY can be used to copy code into the image, ADD src destination can be used to add any directory (like big datasets), and CMD specifies the command that runs the program.
Below is an example Dockerfile:

FROM python:latest

RUN pip install scipy
RUN pip install h5py
RUN pip install Keras
RUN pip install numpy
RUN pip install opencv-python
RUN pip install scikit-learn
RUN pip install --upgrade tensorflow
COPY filename.py /

CMD [ "python", "./filename.py" ]

Pushing to Docker Hub

After creating the Dockerfile, we build the image with docker build -t $DOCKER_ACC/$DOCKER_REPO:$IMG_TAG, log in to our account with docker login --username=yourhubusername --password=yourpassword, and finally push the image to Docker Hub with docker push $DOCKER_ACC/$DOCKER_REPO:$IMG_TAG.
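
For example, with a hypothetical account and repository (fashion-classifier is just an illustrative name):

docker build -t yourhubusername/fashion-classifier:v1 .
docker login --username=yourhubusername
docker push yourhubusername/fashion-classifier:v1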

And now our application is publicly available; it can simply be pulled and run using:

docker pull username/repository:tag

#to run
docker run username/repository:tag

Summary

We successfully created a classification model, containerized it, pushed it to Docker Hub, and made it publicly available.