# Predicting images using the Fashion MNIST dataset

###### Lima Vallantin
Data scientist, Master's student, and interested in everything concerning Data, Natural Language Processing, and modern web.

For this second challenge day, let’s explore the Fashion MNIST dataset.

Over the next few days, I will explore TensorFlow for at least one hour per day and post the notebooks, data, and models to this repository.

The notebook for the second day is available here.

## Step 1: Import the data

```python
# import the libraries
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```

Import the dataset.

```python
# import the Fashion MNIST dataset
mnist = tf.keras.datasets.fashion_mnist

# get the train and test sets
(train_imgs, train_labels), (test_imgs, test_labels) = mnist.load_data()
```

The labels in the dataset are just the integers 0 to 9. Let’s create “readable” labels for them.

```python
# check the labels on the train set
list(set(train_labels))

# set the readable labels
labels = ['Top', 'Pants', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# check the shapes
print('There are {} images on this set.'.format(train_imgs.shape[0]))
print('The images are in the {} x {} pxs format.'.format(train_imgs.shape[1], train_imgs.shape[2]))
```

Let’s plot one image to see what it looks like.

```python
# inspect the first image
plt.figure()
plt.imshow(train_imgs[0])
plt.colorbar()
plt.grid(False)
plt.show()
```

Every pixel in an image has a value between 0 and 255. To speed up training, we preprocess the images by rescaling every pixel to the 0 to 1 range.

Let’s replot the same image after normalization.

```python
train_imgs = train_imgs / 255.0
test_imgs  = test_imgs / 255.0

# reinspect the same image
plt.figure()
plt.imshow(train_imgs[0])
plt.colorbar()
plt.grid(False)
plt.show()
```

## Step 2: Build the model

The first layer receives the images. Its input shape must match the shape of the images (28 × 28 pixels).

The Flatten layer transforms each image into a one-dimensional vector (28 × 28 = 784 values).

The intermediary Dense layer is the hidden layer, whose activation function is ReLU.

The last Dense layer outputs the probabilities of an image belonging to each of the 10 classes. Its activation function is softmax.

```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
```

Now, it’s time to compile the model. This step will configure the model before training. The main parameters are:

• The loss function: measures how far the model’s predictions are from the true labels during training.
• Optimizer: defines how the model’s weights are updated based on the loss (here, a variant of gradient descent).
• Metrics: used to monitor the accuracy of the model.

```python
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Now, it’s time to train. During training, the model receives the training images (train_imgs) and their labels (train_labels) and tries to learn from them.

```python
model.fit(train_imgs,
          train_labels,
          epochs=10)
```

To evaluate how the model is performing, we test it on a set of data it has never seen before: the test_imgs set. The result is a percentage value. In this case, we achieved an accuracy of 88%.

You will notice that the accuracy during the training phase is higher than when we measure it on a separate set. This is due to overfitting: the model learns very well on the images it already knows, but it’s not as good at predicting new information.

A good exercise for the next days is finding ways to decrease overfitting.
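One common way to reduce overfitting is adding a Dropout layer, which randomly zeroes a fraction of the hidden activations during training so the model can’t rely too heavily on any single neuron. This is just a hypothetical sketch of what such a variant of the model above might look like, not something tested in today’s notebook:

```python
import tensorflow as tf

# a variant of the model with a Dropout layer between the
# hidden layer and the output layer
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),  # drop 20% of activations at train time
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Dropout is only active during training; at prediction time the full network is used, so evaluation and prediction code stay unchanged.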

```python
test_loss, test_acc = model.evaluate(test_imgs, test_labels, verbose=2)
print('\nModel accuracy: {:.0f}%'.format(test_acc * 100))
```

## Making predictions

To predict new images, we call the predict method. It predicts all the images in a set at once. The result is an array of arrays.

Each of these arrays contains 10 floats. Each number corresponds to the probability of the image belonging to a certain label.

```python
predictions = model.predict(test_imgs)
```
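To make the shape of each prediction concrete, here is a small NumPy sketch using made-up scores for one image: applying softmax turns 10 arbitrary scores into 10 probabilities that sum to 1, and argmax picks the index of the most likely label.

```python
import numpy as np

labels = ['Top', 'Pants', 'Pullover', 'Dress', 'Coat',
          'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# hypothetical raw scores (logits) for one image
logits = np.array([0.1, 0.2, 0.0, 0.3, 0.1, 0.5, 0.2, 1.0, 0.4, 4.0])

# softmax: exponentiate and normalize so the values sum to 1
probs = np.exp(logits) / np.exp(logits).sum()

print(probs.sum())               # probabilities sum to 1
print(labels[np.argmax(probs)])  # label with the highest probability
```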

If you analyse the first prediction, you will see that the model thinks this image is an ankle boot (ankle boots correspond to label number 9).

```python
print('The model thinks that the index {} has the highest probability.'.format(np.argmax(predictions[0])))
print('The index {} corresponds to the label \'{}\'.'.format(np.argmax(predictions[0]), labels[np.argmax(predictions[0])]))
```

Let’s convert all the predictions to readable labels and look at the first 5.

```python
predictions_labels = [labels[np.argmax(p)] for p in predictions]
predictions_labels[:5]
```

Let’s plot this image to see if the prediction is correct.

```python
# plot the first test image
plt.figure()
plt.imshow(test_imgs[0])
plt.colorbar()
plt.grid(False)
plt.show()
```

## Conclusion: what we learned today

1. How to use a TensorFlow dataset
2. The ReLU and softmax activation functions
3. A little bit about Dense layers
4. What overfitting is
5. Training a model and making predictions