Keras and MNIST#

import matplotlib.pyplot as plt
import numpy as np

We’ll apply the ideas we just learned to a neural network that does character recognition using the MNIST database. This is a set of handwritten digits (0–9), each represented as a 28×28 pixel grayscale image.

There are two datasets: the training set, with 60,000 images, and the test set, with 10,000 images.

import keras

Important

Keras requires a backend, which can be tensorflow, pytorch, or jax.

By default, it will assume tensorflow.

This notebook has been tested with both pytorch and tensorflow.

Tip

To have keras use pytorch, set the environment variable KERAS_BACKEND as:

export KERAS_BACKEND="torch"
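
Alternatively, you can set it from within Python, provided you do so before importing keras (a minimal sketch):

import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before "import keras"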

We follow the example for setting up the network: Vict0rSch/deep_learning

Note

For visualization of the network, you need to have pydot installed.

The MNIST data#

The keras library can download the MNIST data directly and provides a function to give us both the training and test images and the corresponding digits. This is already in a format that Keras wants, so we don’t use the classes that we defined earlier.

from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step

As before, the training set consists of 60,000 digits, each represented as a 28×28 array (there are no color channels, so this is grayscale data). The pixel values are stored as 8-bit unsigned integers.

X_train.shape
(60000, 28, 28)
X_train.dtype
dtype('uint8')

Let’s look at the first digit and the “y” value (target) associated with it—that’s the correct answer.

plt.imshow(X_train[0], cmap="gray_r")
print(y_train[0])
5
the number 5 represented as a small grayscale image

Preparing the Data#

The neural network takes a 1-d vector as input and returns a 1-d vector as output. We need to convert our data to this form.

We’ll scale the image data to fall in [0, 1] and convert the numerical output into a categorical array. Finally, we need the input data to be one-dimensional, so we will flatten each 28×28 image into a single 784-element vector.

X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255

X_train = np.reshape(X_train, (60000, 784))
X_test = np.reshape(X_test, (10000, 784))
X_train[0]
array([0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.01176471, 0.07058824, 0.07058824,
       0.07058824, 0.49411765, 0.53333336, 0.6862745 , 0.10196079,
       0.6509804 , 1.        , 0.96862745, 0.49803922, 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.11764706, 0.14117648, 0.36862746, 0.6039216 ,
       0.6666667 , 0.99215686, 0.99215686, 0.99215686, 0.99215686,
       0.99215686, 0.88235295, 0.6745098 , 0.99215686, 0.9490196 ,
       0.7647059 , 0.2509804 , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.19215687, 0.93333334,
       0.99215686, 0.99215686, 0.99215686, 0.99215686, 0.99215686,
       0.99215686, 0.99215686, 0.99215686, 0.9843137 , 0.3647059 ,
       0.32156864, 0.32156864, 0.21960784, 0.15294118, 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.07058824, 0.85882354, 0.99215686, 0.99215686,
       0.99215686, 0.99215686, 0.99215686, 0.7764706 , 0.7137255 ,
       0.96862745, 0.94509804, 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.3137255 , 0.6117647 , 0.41960785, 0.99215686, 0.99215686,
       0.8039216 , 0.04313726, 0.        , 0.16862746, 0.6039216 ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.05490196,
       0.00392157, 0.6039216 , 0.99215686, 0.3529412 , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.54509807,
       0.99215686, 0.74509805, 0.00784314, 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.04313726, 0.74509805, 0.99215686,
       0.27450982, 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.13725491, 0.94509804, 0.88235295, 0.627451  ,
       0.42352942, 0.00392157, 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.31764707, 0.9411765 , 0.99215686, 0.99215686, 0.46666667,
       0.09803922, 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.1764706 ,
       0.7294118 , 0.99215686, 0.99215686, 0.5882353 , 0.10588235,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.0627451 , 0.3647059 ,
       0.9882353 , 0.99215686, 0.73333335, 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.9764706 , 0.99215686,
       0.9764706 , 0.2509804 , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.18039216, 0.50980395,
       0.7176471 , 0.99215686, 0.99215686, 0.8117647 , 0.00784314,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.15294118,
       0.5803922 , 0.8980392 , 0.99215686, 0.99215686, 0.99215686,
       0.98039216, 0.7137255 , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.09411765, 0.44705883, 0.8666667 , 0.99215686, 0.99215686,
       0.99215686, 0.99215686, 0.7882353 , 0.30588236, 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.09019608, 0.25882354, 0.8352941 , 0.99215686,
       0.99215686, 0.99215686, 0.99215686, 0.7764706 , 0.31764707,
       0.00784314, 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.07058824, 0.67058825, 0.85882354,
       0.99215686, 0.99215686, 0.99215686, 0.99215686, 0.7647059 ,
       0.3137255 , 0.03529412, 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.21568628, 0.6745098 ,
       0.8862745 , 0.99215686, 0.99215686, 0.99215686, 0.99215686,
       0.95686275, 0.52156866, 0.04313726, 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.53333336, 0.99215686, 0.99215686, 0.99215686,
       0.83137256, 0.5294118 , 0.5176471 , 0.0627451 , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        ], dtype=float32)

We will use categorical data, and Keras includes routines to categorize data. In our case, since there are 10 possible digits, we want to put the output into 10 categories (represented by 10 output neurons).

from keras.utils import to_categorical

y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

Now let’s look at the target for the first training digit. We know from above that it was “5”. Here we see that there is a 1 in the index corresponding to 5 (remember that we start counting at 0 in Python).

y_train[0]
array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])

Build the Neural Network#

Now we’ll build the neural network. We will have two hidden layers, and the number of neurons in each layer will look like:

784 → 500 → 300 → 10

Layers#

Let’s start by setting up the layers. For each layer, we tell Keras the number of output neurons. It infers the number of inputs from the previous layer (with the exception of the input layer, where we need to tell it what to expect as input).

We use ReLU activations on the hidden layers, a softmax activation on the output layer, and dropout between layers for regularization:

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Input

model = Sequential()
model.add(Input(shape=(784,)))
model.add(Dense(500, activation="relu"))
model.add(Dropout(0.4))
model.add(Dense(300, activation="relu"))
model.add(Dropout(0.4))
model.add(Dense(10, activation="softmax"))

Loss function#

We need to specify what we want to optimize and how we are going to do it.

Recall: the loss (or cost) function measures how well our predictions match the expected target. Previously we were using the sum of the squares of the error.

For categorical data, like we have, the “cross-entropy” metric is often used. See here for an explanation: https://jamesmccaffrey.wordpress.com/2013/11/05/why-you-should-use-cross-entropy-error-instead-of-classification-error-or-mean-squared-error-for-neural-network-classifier-training/
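
To make this concrete, here’s a minimal sketch (not Keras’s exact implementation, which also averages over batches and clips values for numerical stability) of the categorical cross-entropy for a single one-hot target y and prediction p, computed as -sum(y_i * log(p_i)):

import numpy as np

# one-hot target for the digit 5 and a hypothetical softmax output
y_true = np.zeros(10)
y_true[5] = 1.0
y_pred = np.full(10, 0.02)
y_pred[5] = 0.82  # probabilities sum to 1

# the loss is small when the true class gets high probability
# and blows up as that probability approaches zero
loss = -np.sum(y_true * np.log(y_pred))
print(loss)  # about 0.198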

Optimizer#

We also need to specify an optimizer. This could be gradient descent, as we used before. Here’s a list of the optimizers supported by Keras: https://keras.io/api/optimizers/ We’ll use RMSprop, which builds on gradient descent and includes some momentum.

Finally, we need to specify a metric that is evaluated during training and testing. We’ll use "accuracy" here. This means that we’ll see the accuracy of our model reported as we are training and testing.

More details on these options are here: https://keras.io/api/models/model/

from keras.optimizers import RMSprop

rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])

Network summary#

Let’s take a look at the network:

model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 500)            │       392,500 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 500)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 300)            │       150,300 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_1 (Dropout)             │ (None, 300)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense)                 │ (None, 10)             │         3,010 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 545,810 (2.08 MB)
 Trainable params: 545,810 (2.08 MB)
 Non-trainable params: 0 (0.00 B)

We see that there are more than 500,000 parameters that we will be training.
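
The counts are easy to check by hand: a dense layer with n_in inputs and n_out outputs has n_in × n_out weights plus n_out biases:

print(784 * 500 + 500)  # 392,500 parameters in the first hidden layer
print(500 * 300 + 300)  # 150,300 in the second
print(300 * 10 + 10)    # 3,010 in the output layer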

Train#

For training, we pass in the inputs and targets along with the number of epochs to run, and Keras optimizes the network by adjusting the weights between the nodes in the layers.

The number of epochs is the number of times the entire dataset is passed forward and backward through the network. The batch size is the number of training pairs you pass through the network at a time; you update the parameters in your model (the weights) once per batch. This makes training more efficient and the updates less noisy.
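
For example, with our 60,000 training images and a batch size of 256, each epoch performs ceil(60000 / 256) = 235 weight updates; that’s the “235/235” counter in the training log below.

import math
# number of batches (weight updates) per epoch
print(math.ceil(60000 / 256))  # 235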

epochs = 20
batch_size = 256
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(X_test, y_test), verbose=2)
Epoch 1/20
235/235 - 4s - 17ms/step - accuracy: 0.8846 - loss: 0.3777 - val_accuracy: 0.9544 - val_loss: 0.1499
Epoch 2/20
235/235 - 4s - 17ms/step - accuracy: 0.9517 - loss: 0.1601 - val_accuracy: 0.9682 - val_loss: 0.1046
Epoch 3/20
235/235 - 4s - 17ms/step - accuracy: 0.9645 - loss: 0.1176 - val_accuracy: 0.9749 - val_loss: 0.0836
Epoch 4/20
235/235 - 4s - 17ms/step - accuracy: 0.9695 - loss: 0.0965 - val_accuracy: 0.9770 - val_loss: 0.0748
Epoch 5/20
235/235 - 4s - 17ms/step - accuracy: 0.9751 - loss: 0.0825 - val_accuracy: 0.9769 - val_loss: 0.0706
Epoch 6/20
235/235 - 4s - 17ms/step - accuracy: 0.9772 - loss: 0.0720 - val_accuracy: 0.9793 - val_loss: 0.0679
Epoch 7/20
235/235 - 4s - 17ms/step - accuracy: 0.9802 - loss: 0.0646 - val_accuracy: 0.9808 - val_loss: 0.0656
Epoch 8/20
235/235 - 4s - 18ms/step - accuracy: 0.9819 - loss: 0.0587 - val_accuracy: 0.9814 - val_loss: 0.0634
Epoch 9/20
235/235 - 4s - 17ms/step - accuracy: 0.9834 - loss: 0.0522 - val_accuracy: 0.9826 - val_loss: 0.0584
Epoch 10/20
235/235 - 4s - 17ms/step - accuracy: 0.9847 - loss: 0.0490 - val_accuracy: 0.9839 - val_loss: 0.0592
Epoch 11/20
235/235 - 4s - 17ms/step - accuracy: 0.9856 - loss: 0.0458 - val_accuracy: 0.9830 - val_loss: 0.0628
Epoch 12/20
235/235 - 4s - 17ms/step - accuracy: 0.9862 - loss: 0.0437 - val_accuracy: 0.9837 - val_loss: 0.0631
Epoch 13/20
235/235 - 4s - 18ms/step - accuracy: 0.9880 - loss: 0.0386 - val_accuracy: 0.9845 - val_loss: 0.0604
Epoch 14/20
235/235 - 4s - 16ms/step - accuracy: 0.9880 - loss: 0.0373 - val_accuracy: 0.9844 - val_loss: 0.0618
Epoch 15/20
235/235 - 4s - 18ms/step - accuracy: 0.9891 - loss: 0.0338 - val_accuracy: 0.9833 - val_loss: 0.0584
Epoch 16/20
235/235 - 4s - 18ms/step - accuracy: 0.9889 - loss: 0.0345 - val_accuracy: 0.9838 - val_loss: 0.0612
Epoch 17/20
235/235 - 4s - 17ms/step - accuracy: 0.9901 - loss: 0.0320 - val_accuracy: 0.9830 - val_loss: 0.0627
Epoch 18/20
235/235 - 4s - 17ms/step - accuracy: 0.9902 - loss: 0.0300 - val_accuracy: 0.9834 - val_loss: 0.0626
Epoch 19/20
235/235 - 4s - 17ms/step - accuracy: 0.9901 - loss: 0.0294 - val_accuracy: 0.9826 - val_loss: 0.0697
Epoch 20/20
235/235 - 4s - 17ms/step - accuracy: 0.9909 - loss: 0.0287 - val_accuracy: 0.9843 - val_loss: 0.0639
<keras.src.callbacks.history.History at 0x7f31e19d7a10>

Test#

Keras has a routine, evaluate(), that takes the inputs and targets of a test dataset and returns the loss value and accuracy (or other defined metrics) on that data.

Here we see we are > 98% accurate on the test data—this is the data that the model has never seen before (and was not trained with).

loss_value, accuracy = model.evaluate(X_test, y_test, batch_size=16)
print(accuracy)
625/625 ━━━━━━━━━━━━━━━━━━━━ 4s 6ms/step - accuracy: 0.9843 - loss: 0.0639
0.9843000173568726

Predicting#

Suppose we simply want to ask our neural network to predict the target for an input. We can use the predict() method to return an array of category probabilities, and then use np.argmax() to select the most probable category.

np.argmax(model.predict(np.array([X_test[0]])))
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step
np.int64(7)
y_test[0]
array([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.])

Now let’s loop over the test set and print out what we predict vs. the true answer for those we get wrong. We can also plot the image of the digit.

wrong = 0
max_wrong = 10

for n, (x, y) in enumerate(zip(X_test, y_test)):
    try:
        res = model.predict(np.array([x]), verbose=0)
        if np.argmax(res) != np.argmax(y):
            print(f"test {n}: prediction = {np.argmax(res)}, truth is {np.argmax(y)}")
            plt.imshow(x.reshape(28, 28), cmap="gray_r")
            plt.show()
            wrong += 1
            if wrong >= max_wrong:
                break
    except KeyboardInterrupt:
        print("stopping")
        break
test 115: prediction = 9, truth is 4
test 149: prediction = 3, truth is 2
test 247: prediction = 6, truth is 4
test 321: prediction = 7, truth is 2
test 340: prediction = 3, truth is 5
test 445: prediction = 0, truth is 6
test 495: prediction = 0, truth is 8
test 582: prediction = 2, truth is 8
test 613: prediction = 8, truth is 2
test 646: prediction = 6, truth is 2
(each line is followed by a grayscale image of the misclassified digit)

Experimenting#

There are a number of things we can play with to see how the network performance changes (a parameterized model builder, sketched after this list, makes these experiments easy):

  • batch size

  • adding or removing hidden layers

  • changing the dropout

  • changing the activation function
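
Here’s a minimal sketch of such a builder (build_model is a hypothetical helper, not part of Keras):

from keras.models import Sequential
from keras.layers import Dense, Dropout, Input
from keras.optimizers import RMSprop

def build_model(hidden=(500, 300), dropout=0.4, activation="relu"):
    """Rebuild the network with different hyperparameters."""
    m = Sequential()
    m.add(Input(shape=(784,)))
    for n in hidden:
        m.add(Dense(n, activation=activation))
        m.add(Dropout(dropout))
    m.add(Dense(10, activation="softmax"))
    m.compile(loss="categorical_crossentropy",
              optimizer=RMSprop(), metrics=["accuracy"])
    return m

# e.g., one wider hidden layer, less dropout, and tanh activations
model2 = build_model(hidden=(800,), dropout=0.2, activation="tanh")
model2.fit(X_train, y_train, epochs=5, batch_size=128,
           validation_data=(X_test, y_test), verbose=2)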

Callbacks#

Keras allows for callbacks each epoch to store some information. These let you, for example, plot the accuracy vs. epoch. Take a look here for some inspiration:

https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History
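
In fact, fit() already attaches a History callback and returns it, so a minimal sketch of an accuracy-vs-epoch plot needs no custom callback at all:

# note: calling fit() again continues training the existing model
history = model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
                    validation_data=(X_test, y_test), verbose=0)

# the History callback records each metric once per epoch
plt.plot(history.history["accuracy"], label="training")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()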

Going Further#

Convolutional neural networks are often used for image recognition, especially with larger images. They use filters to recognize patterns in portions of an image (tiles). See this Keras example:

https://www.tensorflow.org/tutorials/images/cnn
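
As a taste of what that looks like (a minimal sketch, not the tutorial’s exact architecture; it expects image-shaped input rather than our flattened vectors):

from keras.models import Sequential
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

cnn = Sequential()
cnn.add(Input(shape=(28, 28, 1)))               # keep the 2-d image structure
cnn.add(Conv2D(32, (3, 3), activation="relu"))  # 32 learned 3x3 filters
cnn.add(MaxPooling2D((2, 2)))                   # downsample by 2 in each direction
cnn.add(Flatten())
cnn.add(Dense(10, activation="softmax"))
cnn.compile(loss="categorical_crossentropy",
            optimizer="rmsprop", metrics=["accuracy"])

# train with image-shaped data, e.g.
# cnn.fit(X_train.reshape(-1, 28, 28, 1), y_train, epochs=5, batch_size=256)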