by Pierce on March 30, 2017

Intro to TensorBoard

Now that we are regularly validating the data and checkpointing our model, we can start thinking about ways to visualize what goes into and comes out of the model, and to do exploratory analysis while it trains or after it finishes. In his talk at the TensorFlow Dev Summit 2017, Dandelion Mané described TensorBoard as a flashlight to shine on the black box of deep neural networks. Sometimes shining a bright light is ill-advised.

Lucky for us, deep neural networks won’t turn into gremlins if you shine a bright light on them.

The Code

We are going to use the code from here as our starting point. We are already monitoring and checkpointing, with the output going to the command line; now we are going to visualize what we are already logging and add some more informative visualizations.

We Already Have a Lot

We already have a lot of information for TensorBoard to use. TensorFlow’s tf.contrib.learn module actually creates a lot of output that TensorBoard is expecting. If you are using the code from 02-monitoring-and-checkpointing and have already run the model to completion, all you need to do is run:

tensorboard --logdir=output/

All this command does is tell TensorBoard to look at the location where we saved our models. Another thing to note is that TensorBoard serves on port 6006 by default. To see how you can open TensorBoard on our AMIs, take a look at our user guide. Once everything is set up and working, you should see the following output:

[Image: the TensorBoard dashboard]

Great! Now we can start exploring TensorBoard. Some things to point out:

  • “Loss” and “accuracy” come from the monitor we set up in our monitoring and checkpointing post (a rough sketch of that kind of monitor follows this list)
  • You can see a visualization of your neural network in the “Graphs” tab
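
To keep this post self-contained, here is a minimal sketch of the kind of validation monitor we mean. It is not the exact code from the previous post, and the MNIST dataset objects here are only illustrative:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Illustrative held-out data; the previous post's setup may differ.
mnist = input_data.read_data_sets('MNIST_data')

# A ValidationMonitor periodically evaluates the model during training and
# logs metrics such as loss and accuracy, which is where those scalar
# curves in TensorBoard come from.
validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
    x=mnist.test.images,
    y=mnist.test.labels,
    every_n_steps=100,  # evaluate every 100 training steps
    metrics={'accuracy': tf.contrib.learn.MetricSpec(
        metric_fn=tf.contrib.metrics.streaming_accuracy)})

# The monitor gets passed to Estimator.fit(..., monitors=[validation_monitor]).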

We highly recommend that you take some time and explore the TensorBoard. We’ll wait here.

Adding TensorBoard Visualizations

Done? OK, let's keep going. In this section we are going to show how to change some of the code we already have to add visualizations to your TensorBoard. Some of what follows might not be the most effective way to visualize the data. We will visit TensorBoard again after covering the syntax of the base TensorFlow language and will clean up some parts of the TensorBoard. If anyone finds resources that help make the "Graphs" visualization better, please comment below and/or open up a pull request.

You might have noticed that the “Distributions” and “Histograms” sections had nothing in them. What we want to add are visualizations of the weights, biases, and activations. To do this, we replace the following code (the model definition):

# Model Definition

def fully_connected_model(features, labels):
    # One-hot encode the integer class labels (10 classes for MNIST)
    labels = tf.one_hot(tf.cast(labels, tf.int32), 10, 1, 0)

    # Two hidden fully connected layers followed by a 10-unit output layer
    layer1 = tf.layers.dense(features, 512, activation=tf.nn.relu, name='fc1')
    layer2 = tf.layers.dense(layer1, 128, activation=tf.nn.relu, name='fc2')
    logits = tf.layers.dense(layer2, 10, activation=None, name='out')

    # Softmax cross-entropy loss averaged over the batch
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    # Plain gradient descent, tracking the global step for checkpointing
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss,
                  tf.contrib.framework.get_global_step())

    return tf.argmax(logits, 1), loss, train_op

With this:

# Model Definition

def fully_connected_model(features, labels):
    # One-hot encode the integer class labels (10 classes for MNIST)
    labels = tf.one_hot(tf.cast(labels, tf.int32), 10, 1, 0)
    # Log up to 10 input images per batch to the Images tab
    tf.summary.image('input', tf.reshape(features, [-1, 28, 28, 1]), 10)

    with tf.name_scope('fc1'):
        layer1 = tf.layers.dense(features, 512, activation=tf.nn.relu, name='fc1')
        # Grab this layer's kernel and bias variables by scope name
        fc1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'fc1')
        tf.summary.histogram('kernel', fc1_vars[0])
        tf.summary.histogram('bias', fc1_vars[1])
        tf.summary.histogram('act', layer1)

    with tf.name_scope('fc2'):
        layer2 = tf.layers.dense(layer1, 128, activation=tf.nn.relu, name='fc2')
        fc2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'fc2')
        tf.summary.histogram('kernel', fc2_vars[0])
        tf.summary.histogram('bias', fc2_vars[1])
        tf.summary.histogram('act', layer2)

    with tf.name_scope('out'):
        logits = tf.layers.dense(layer2, 10, activation=None, name='out')
        out_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'out')
        tf.summary.histogram('kernel', out_vars[0])
        tf.summary.histogram('bias', out_vars[1])
        tf.summary.histogram('act', logits)

    with tf.name_scope('loss'):
        # Softmax cross-entropy loss, also logged as a scalar summary
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits,
            labels=labels))
        tf.summary.scalar('loss', loss)

    with tf.name_scope('train_op'):
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss,
            tf.contrib.framework.get_global_step())

    return tf.argmax(logits, 1), loss, train_op

This is pretty dense, so let’s step through what we did.

  • We added a tf.summary.image call on the input. The third argument (10) is max_outputs, so up to 10 of the input images from a logged batch are shown in the Images tab of the TensorBoard
  • We added a tf.summary.histogram for the kernel (weights), bias, and activations of each layer in the network. We pulled the weight and bias variables out of the tf.GraphKeys.TRAINABLE_VARIABLES collection using the layer’s scope name (see the short sketch after this list). This adds summaries of these variables to the Distributions and Histograms tabs of the TensorBoard.
  • We used tf.summary.scalar to add the training loss to the “Scalars” tab on TensorBoard
  • We wrapped each part of the model in “with tf.name_scope()” blocks, which groups the corresponding ops and makes interpretation easier in the “Graphs” tab of the TensorBoard

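If the indexing into fc1_vars looks opaque, the short standalone sketch below (the placeholder shape is just illustrative) shows what the collection lookup returns for a named dense layer: the kernel first, then the bias.

import tensorflow as tf

# A dense layer created with name='fc1' registers its variables as
# 'fc1/kernel' and 'fc1/bias' in the trainable-variables collection.
x = tf.placeholder(tf.float32, [None, 784])  # illustrative input shape
tf.layers.dense(x, 512, activation=tf.nn.relu, name='fc1')

# get_collection filters the collection by scope name, so we get back
# exactly this layer's variables: the kernel (weights) and then the bias.
fc1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'fc1')
print([v.name for v in fc1_vars])  # ['fc1/kernel:0', 'fc1/bias:0']
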
All of these changes are reflected in the model.py file and the Jupyter notebook. We leave it to you to train the model (as shown in the previous post), point TensorBoard at the new output directory using the CLI command shown earlier in this post, and explore all the great information and visualizations!
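
For reference, wiring the model function into training looks roughly like the sketch below. It assumes the tf.contrib.learn Estimator setup from the previous post; the batch size, step count, and directory name are only illustrative.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# MNIST with integer labels, as in the earlier posts.
mnist = input_data.read_data_sets('MNIST_data')

# Wrap the model function in a contrib.learn Estimator. model_dir is where
# checkpoints and summary events are written, and it is the directory we
# point TensorBoard at with --logdir.
estimator = tf.contrib.learn.Estimator(
    model_fn=fully_connected_model,
    model_dir='output/')

estimator.fit(x=mnist.train.images,
              y=mnist.train.labels,
              batch_size=128,   # illustrative
              steps=2000)       # illustrative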

Summary and Next Steps

What we did here is add a lot of detail to the TensorBoard that we can observe while the model trains. TensorBoard can be used as a debugging tool or as a very easy way to see what the model is learning. While this is not very important for a small example like ours, it becomes a great tool as training times grow. TensorBoard by default updates every 120 seconds, so you can keep a tab open that monitors your model as it trains locally or on another server. One shortcoming that we will address later is that the “Graphs” tab is still a little messy; because of all the tf.contrib.learn calls, some details of the graph get a little convoluted. Another thing to look forward to in the next post is how we can use TensorBoard to visualize the activations of convolutional layers as the model trains. Hope to see you next week!

Topics: tutorial
