# Plot loss and accuracy of a trained model

###### Wilame
Marketing data scientist and Master's student interested in everything concerning Data, Text Mining, and Natural Language Processing. Currently speaking Brazilian Portuguese, French, English, and a tiiiiiiiiny bit of German. Want to connect? You can send me a message. For more information about me, you can visit this page.


For today’s challenge, we will plot the loss and the accuracy of a model trained with TensorFlow.

Over the next few days, I will explore TensorFlow for at least 1 hour per day and post the notebooks, data, and models to this repository.

Today’s notebook is available here.

## Plot loss and accuracy of a trained model

One of the tasks you need to know how to do is plotting the accuracy and the loss of a trained model.

Today, we will explore one way of doing it.

## Redo the pre-processing and model training

```python
# imports
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import matplotlib.pyplot as plt

import numpy as np
import pandas as pd
import io

# get data
!wget --no-check-certificate \

# define get_data function
def get_data(path):
    # read the downloaded csv file into a dataframe
    data = pd.read_csv(path)
    return data

# get the data
data = get_data('/tmp/sentiment.csv')

# clone package repository
!git clone https://github.com/vallantin/atalaia.git

# navigate to atalaia directory
%cd atalaia

# install package requirements
!pip install -r requirements.txt

# install package
!python setup.py install

# import it
from atalaia.atalaia import Atalaia

# define the pre-processing function
def preprocess(panda_series):
    atalaia = Atalaia('en')

    # lower case everything and remove double spaces
    panda_series = (atalaia.lower_remove_white(t) for t in panda_series)

    # expand contractions
    panda_series = (atalaia.expand_contractions(t) for t in panda_series)

    # remove punctuation
    panda_series = (atalaia.remove_punctuation(t) for t in panda_series)

    # remove numbers
    panda_series = (atalaia.remove_numbers(t) for t in panda_series)

    # remove stopwords
    panda_series = (atalaia.remove_stopwords(t) for t in panda_series)

    # remove excessive spaces
    panda_series = (atalaia.remove_excessive_spaces(t) for t in panda_series)

    return panda_series

# preprocess it
preprocessed_text = preprocess(data.text)

# assign preprocessed texts to dataset
data['text'] = list(preprocessed_text)

# split train/test
# shuffle the dataset
data = data.sample(frac=1)

# separate all classes present on the dataset
classes_dict = {}
for label in [0, 1]:
    classes_dict[label] = data[data['sentiment'] == label]

# get 80% of each label for training and keep the remaining 20% for testing
# (this assumes both classes are roughly the same size)
size    = int(len(classes_dict[0].text) * 0.8)
X_train = list(classes_dict[0].text[0:size])      + list(classes_dict[1].text[0:size])
X_test  = list(classes_dict[0].text[size:])       + list(classes_dict[1].text[size:])
y_train = list(classes_dict[0].sentiment[0:size]) + list(classes_dict[1].sentiment[0:size])
y_test  = list(classes_dict[0].sentiment[size:])  + list(classes_dict[1].sentiment[size:])

# convert labels to Numpy arrays
y_train = np.array(y_train)
y_test  = np.array(y_test)

# let's consider the vocab size as the number of words
# that compose 90% of the vocabulary
atalaia    = Atalaia('en')
vocab_size = len(atalaia.representative_tokens(0.9,
                                               ' '.join(X_train),
                                               reverse=False))
oov_tok = "<OOV>"

# start the tokenizer
tokenizer = Tokenizer(num_words=vocab_size,
                      oov_token=oov_tok)

# fit on the training set only
# we don't fit on test because, in real life, our model will have to deal with
# words it never saw before. So, it makes sense to fit only on training.
# when the tokenizer finds a word it never saw before, it will assign the
# <OOV> token to it.
tokenizer.fit_on_texts(X_train)

# get the word index
word_index = tokenizer.word_index

# transform into sequences
# this will assign an index to the tokens present on the corpus
sequences = tokenizer.texts_to_sequences(X_train)

# define max_length
max_length = 100

# post: pad or truncate after the sentence.
# pre: pad or truncate before the sentence.
trunc_type = 'post'

# pad/truncate the training sequences
X_train_padded = pad_sequences(sequences,
                               maxlen=max_length,
                               truncating=trunc_type)

# tokenize and pad test sentences
# these will be used later to test the model's accuracy
X_test_sequences = tokenizer.texts_to_sequences(X_test)
X_test_padded    = pad_sequences(X_test_sequences,
                                 maxlen=max_length,
                                 truncating=trunc_type)

# create the reverse word index
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

# create the decoder
def text_decoder(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
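The comment above explains why the tokenizer is fit only on the training texts. As a minimal pure-Python sketch of that idea (an illustration only, not the Keras `Tokenizer` implementation), every word unseen at fit time collapses into the single out-of-vocabulary index:

```python
# Toy word index: the OOV token always gets id 1, like in Keras
def build_index(texts, oov_token="<OOV>"):
    index = {oov_token: 1}
    for text in texts:
        for word in text.lower().split():
            if word not in index:
                index[word] = len(index) + 1
    return index

# Map a sentence to ids; unknown words fall back to the OOV id
def to_sequence(text, index):
    return [index.get(w, index["<OOV>"]) for w in text.lower().split()]

index = build_index(["good movie", "bad movie"])
print(to_sequence("good plot", index))  # "plot" was never seen -> [2, 1]
```

This is why the test set can contain words the model never saw without breaking the pipeline: they all map to one shared id.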

## Build and compile the model

```python
# Build network
embedding_dim = 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# compile it (binary classification, so we use binary cross-entropy)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
```

When training the model, assign the training process to the variable `history`.

```python
# train the model
num_epochs = 10
history    = model.fit(X_train_padded,
                       y_train,
                       epochs=num_epochs,
                       batch_size=32,
                       validation_split=0.2,
                       shuffle=True)
```

`history` holds information about the loss and the accuracy captured during training.

```python
# list all data in history
print(history.history.keys())
```

## Plotting accuracy

If you are doing everything right, you will notice that the accuracy for train and test goes up until it stabilizes. We ran so few epochs that the model didn’t have time to overfit.

```python
# Accuracy history
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```

Now, we can check the loss. The loss should go down for both train and test. Notice how this doesn’t happen for the test set, indicating that our model has room for improvement.
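The loss plot mirrors the accuracy plot above. Here is a sketch of it; the `history_dict` values below are made up so the snippet runs standalone, and in the notebook you would plot `history.history` directly:

```python
import matplotlib.pyplot as plt

# In the notebook, replace this dummy dict with `history.history`;
# the values here are invented just to keep the snippet runnable.
history_dict = {'loss':     [0.69, 0.52, 0.41],
                'val_loss': [0.68, 0.62, 0.64]}

# Loss history
plt.plot(history_dict['loss'])
plt.plot(history_dict['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```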

If we print the accuracy, we can see that it’s very low, only 79%.

```python
# evaluate returns the loss and the metrics defined at compile time
loss, accuracy = model.evaluate(X_test_padded, y_test)
print('Model accuracy is {:.2f}%'.format(accuracy*100))

>> 13/13 [==============================] - 0s 3ms/step - loss: 0.8449 - accuracy: 0.7975
>> Model accuracy is 79.75%
```