Lima Vallantin
Marketing data scientist and Master's student interested in everything concerning Data, Text Mining, and Natural Language Processing. Currently speaking Brazilian Portuguese, French, English, and a tiiiiiiiiny bit of German. Want to connect? Send me a message. Want to know more about me? Visit the "About" page.



Transfer learning is a way to reuse already trained models to increase the performance of a new model being trained. Today, we will explore this concept.

BUT FIRST, it’s time for a brief reflection on this challenge.

When I decided to start the #100DaysOfTensorflow challenge, I had two main goals: to discover features about this ecosystem that I still didn’t know and to not “forget” what I had already learned.

However, I don’t consider the #100DaysOf**Something** format optimal. Sometimes, we keep doing things that don’t “connect” just for the sake of doing them.

While I believe this may help us to remember automatic things – such as “the Dense layer’s path is tf.keras.layers.Dense” – I think that analytical thinking requires more work and a deeper analysis of problems.

Also, learning TensorFlow is great, but Machine Learning, and even data analysis, is not all about this one tool. There are often simpler and faster ways to solve data-related problems.

So, I will be changing the format of this challenge in the following way:

  • Instead of #100DaysOfTensorflow, let’s call this challenge #100DaysOfData
  • I will still continue to code every day, but I may not publish complete code on a daily basis. In my opinion, rush is the greatest enemy of good analysis. So, instead, I’d rather comment on what I have done for the day.

That being said… During the next days, I will explore TensorFlow for at least 1 hour per day and post the notebooks, data, and models, when they are available, to this repository.

Today’s notebook is available here.

Let’s start!

# do imports
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds

Get the examples from the “Cats vs. dogs” dataset.

  • Train: 80%
  • Validation: 10%
  • Test: 10%

The dataset contains images with different shapes and 3 channels.

(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True,
)

First thing we will do is to resize all images, so they have a 100 x 100 size. Tensorflow official example uses 160 x 160, but I would like to experiment with smaller values to check the impact of this change.

IMG_SIZE = 100 # All images will be resized to 100x100

def format_example(image, label):
  image = tf.cast(image, tf.float32)
  image = (image/127.5) - 1
  image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
  return image, label
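The `(image/127.5) - 1` step rescales pixel values from [0, 255] to [-1, 1], the input range MobileNetV2 expects. A quick NumPy check of the mapping at the extremes and midpoint:

```python
import numpy as np

# pixel values at the bottom, middle, and top of the [0, 255] range
pixels = np.array([0.0, 127.5, 255.0])

# same rescaling as in format_example
scaled = pixels / 127.5 - 1
print(scaled)  # [-1.  0.  1.]
```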

# apply the formatting function to each split
train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)

# shuffle the dataset and batch the data
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000

train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)

Create the base model using pre-trained convnets

The base model used here comes from TensorFlow’s official examples and uses the MobileNet V2 model developed at Google.

According to them, “this is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes”.


# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')

We have to “freeze” the convolutional base created before to use it as a feature extractor. Then, we add a classifier on top of it and train the top-level classifier. To freeze the model, we set the trainable flag to “False”.

base_model.trainable = False

# check model
base_model.summary()

To generate predictions, we use a GlobalAveragePooling2D layer to average the spatial features and a Dense layer to convert them into a single prediction per image.

# add layer to create features
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()

# add a prediction layer
prediction_layer = tf.keras.layers.Dense(1)
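As a sanity check on the shapes involved: global average pooling collapses each feature map to a single number by averaging over the spatial grid, and the Dense(1) layer then maps that vector to one logit per image. A NumPy sketch, assuming a hypothetical 4×4×1280 feature batch (roughly what MobileNetV2 produces for 100×100 inputs):

```python
import numpy as np

# hypothetical feature batch: 32 images, 4x4 spatial grid, 1280 channels
features = np.random.rand(32, 4, 4, 1280)

# global average pooling: average over the two spatial dimensions
pooled = features.mean(axis=(1, 2))
print(pooled.shape)  # (32, 1280)

# a Dense(1) layer is just a (1280, 1) weight matrix plus a bias
weights, bias = np.random.rand(1280, 1), 0.1
logits = pooled @ weights + bias
print(logits.shape)  # (32, 1)
```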

# create model
model = tf.keras.Sequential([
    base_model,
    global_average_layer,
    prediction_layer
])

# compile
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

# see summary
model.summary()

# train model
initial_epochs = 10

history = model.fit(train_batches,
                    epochs=initial_epochs,
                    validation_data=validation_batches)

Check the loss and the accuracy.

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
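Besides plotting, the `history.history` dict can be queried directly, for instance to find the best epoch. A sketch with hypothetical values standing in for a real `val_accuracy` list:

```python
# hypothetical values standing in for history.history['val_accuracy']
val_acc = [0.52, 0.71, 0.84, 0.88, 0.91, 0.90, 0.92, 0.93, 0.93, 0.94]

# epoch numbers are 1-based, list indices are 0-based
best_epoch = max(range(len(val_acc)), key=lambda i: val_acc[i]) + 1
print(best_epoch)  # 10
```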
