TensorFlow Tutorial: Part 3 - Building Your First Model

In this multi-part series, we will explore how to get started with TensorFlow. This TensorFlow tutorial will lay a solid foundation in this popular tool that everyone seems to be talking about. The third part covers building our first prediction model in TensorFlow.

This series is excerpted from a webinar tutorial series I conducted as part of the United Network of Professionals. From time to time I will refer to some of the slides I used in the talk to make things clearer.

Please don’t miss out on the live webinars where I talk about everything I write. Register for our upcoming webinars to find out which topics we will be discussing.

This post is the third part of the multi-part series on a complete TensorFlow tutorial.

TensorFlow Tutorial: Goals

[Slide: goals of this TensorFlow tutorial series]

Computation Graph

We continue our discussion by building our first linear regression model with TensorFlow to get the basics right. After this we will jump straight to neural networks and deep learning implementations. In this context, let us revisit the computation graph we discussed earlier.

[Figure: computation graph for the linear house-price prediction model]

We want to come up with a model that can predict house prices. The objective is not to build a great house price predictor, but to use a simple case study to warm up to the TensorFlow environment, so that we can concentrate on deep learning models in the upcoming posts.

The computation graph constructs a linear equation: the size (the independent variable) is multiplied by a factor (to be learned) and added to an offset (also to be learned). Take a look at the diagram above to visualize what we are trying to do.
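
In equation form, the model we are fitting is simply a line, with size_factor and price_offset as the two parameters to be learned:

$$\widehat{\text{price}} = \text{size\_factor} \times \text{house\_size} + \text{price\_offset}$$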

Predicting House Prices

Well, all the definitions and terminology are in place. We also generated the data and normalized it in the previous post.
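
For reference, here is a minimal sketch of that normalization step, assuming the variable names from the previous post (train_house_size, train_price and their *_norm counterparts):

import numpy as np

def normalize(array):
    # Standard score: zero mean, unit standard deviation
    return (array - array.mean()) / array.std()

train_house_size_norm = normalize(train_house_size)
train_price_norm = normalize(train_price)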

All aboard to get started!

Set up TensorFlow variables

# Import the libraries we need (TensorFlow 1.x API)
import tensorflow as tf
import numpy as np

# 1. Set up the TensorFlow placeholders that get updated as we descend down the gradient
tf_house_size = tf.placeholder("float", name="house_size")
tf_price = tf.placeholder("float", name="price")

# Define the variables holding the size_factor and price_offset we set during training.
# We initialize them to some random values based on the normal distribution.
tf_size_factor = tf.Variable(np.random.randn(), name="size_factor")
tf_price_offset = tf.Variable(np.random.randn(), name="price_offset")

Here we set up the right TensorFlow variables. If you recall the types of tensors we discussed, we set up placeholders for holding our data – the house size and the house price. Then we initialize TensorFlow variables for the factor and the offset – the weights of this computation graph that we are going to vary.
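
To see the difference between the two tensor types in action, here is a minimal hedged sketch (not part of the tutorial's main flow): a variable has a value once initialized, while a placeholder must be fed each time the graph runs.

# A variable returns its current value after initialization;
# a placeholder raises an error unless it is fed via feed_dict
with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    print(s.run(tf_size_factor))  # variable: prints the random initial value
    print(s.run(tf_house_size, feed_dict={tf_house_size: 1500.0}))  # placeholder: must be fed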

Implementing the graph in TensorFlow

# 2. Define the operations for the predicting values - predicted price = (size_factor * house_size ) + price_offset
#  Notice, the use of the tensorflow add and multiply functions.  These add the operations to the computation graph,
#  AND the tensorflow methods understand how to deal with Tensors.  Therefore do not try to use numpy or other library 
#  methods.
tf_price_pred = tf.add(tf.multiply(tf_size_factor, tf_house_size), tf_price_offset)

Here we see that tf_size_factor and tf_house_size go through tf.multiply, and the output is added to tf_price_offset – adhering to the computation graph shown above.
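
As a side note (an equivalent form, not used in the original post), TensorFlow overloads Python's arithmetic operators on tensors, so the same two graph operations can be written more compactly:

# Builds the same multiply and add nodes as the tf.multiply / tf.add version
tf_price_pred = tf_size_factor * tf_house_size + tf_price_offset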

Cost Function

The cost function is defined according to the computation graph below. It gives us a way to measure the error of our TensorFlow model.

[Figure: the computation graph extended with the loss calculation]

# 3. Define the loss function (how much error) - mean squared error
tf_cost = tf.reduce_sum(tf.pow(tf_price_pred - tf_price, 2)) / (2 * num_train_samples)

We take the difference between the prediction and the ground truth, square it, and sum it across all data points. This is the typical cost function used in linear regression.
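
Written out, with $m$ = num_train_samples, the predicted price $\hat{y}^{(i)}$ and the actual price $y^{(i)}$, this is the mean squared error halved (the extra factor of 2 is conventional – it cancels neatly when the squared term is differentiated during gradient descent):

$$\text{cost} = \frac{1}{2m}\sum_{i=1}^{m}\left(\hat{y}^{(i)} - y^{(i)}\right)^{2}$$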

Define the optimizer

We set up a gradient descent optimizer for learning the weights, with a learning rate of 0.1.

# Optimizer learning rate.  The size of the steps down the gradient
learning_rate = 0.1

# 4. define a Gradient descent optimizer that will minimize the loss defined in the operation "cost".
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(tf_cost)
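
Under the hood, minimize() is shorthand for two steps: compute the gradients of the cost with respect to our two variables, then apply the update rule. A hedged sketch of the equivalent expanded form (TF 1.x API):

# Equivalent to the one-liner above
opt = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = opt.compute_gradients(tf_cost)  # gradients w.r.t. size_factor and price_offset
optimizer = opt.apply_gradients(grads_and_vars)  # var <- var - learning_rate * gradient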

Learning weights

We finally give this bad boy a run by stepping through the iterations and running these operations over and over until the error is minimized –

# Initialize the variables defined above
init = tf.global_variables_initializer()

# Launch the graph in a session.  The constants num_training_iter and display_every
# are assumed to be set already (e.g. num_training_iter = 50, display_every = 2)
with tf.Session() as sess:
    sess.run(init)

    # Pre-allocate arrays to record the fit at each display step (used by the animation below)
    fit_size_factor = np.zeros(num_training_iter // display_every)
    fit_price_offsets = np.zeros(num_training_iter // display_every)
    fit_plot_idx = 0

    # keep iterating the training data
    for iteration in range(num_training_iter):

        # Fit all training data
        for (x, y) in zip(train_house_size_norm, train_price_norm):
            sess.run(optimizer, feed_dict={tf_house_size: x, tf_price: y})

        # Display current status
        if (iteration + 1) % display_every == 0:
            c = sess.run(tf_cost, feed_dict={tf_house_size: train_house_size_norm, tf_price: train_price_norm})
            print("iteration #:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(c),
                  "size_factor=", sess.run(tf_size_factor), "price_offset=", sess.run(tf_price_offset))
            # Save the fit size_factor and price_offset to allow animation of the learning process
            fit_size_factor[fit_plot_idx] = sess.run(tf_size_factor)
            fit_price_offsets[fit_plot_idx] = sess.run(tf_price_offset)
            fit_plot_idx = fit_plot_idx + 1

    print("Optimization Finished!")
    training_cost = sess.run(tf_cost, feed_dict={tf_house_size: train_house_size_norm, tf_price: train_price_norm})
    print("Trained cost=", training_cost, "size_factor=", sess.run(tf_size_factor), "price_offset=", sess.run(tf_price_offset), '\n')

We run everything through the TensorFlow session object, feeding in the data through the placeholders we defined earlier. Every alternate iteration the current cost is printed, which helps us monitor the smooth decrease of the cost function.
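
Once the parameters are learned, the same graph can be reused for prediction. A hedged sketch (this would run inside the same "with tf.Session() as sess:" block as the training loop; the normalization statistics train_house_size_mean/std and train_price_mean/std are assumed from the previous post):

    # Predict the price of a hypothetical 2,000 sq.ft house:
    # normalize the input, run the graph, then de-normalize the output
    new_size_norm = (2000 - train_house_size_mean) / train_house_size_std
    pred_norm = sess.run(tf_price_pred, feed_dict={tf_house_size: new_size_norm})
    predicted_price = pred_norm * train_price_std + train_price_mean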

Plot the output

Finally, we will use some matplotlib to visualize what we did –

    # Plot another graph animating how gradient descent sequentially adjusted size_factor
    # and price_offset to find the values that returned the "best" fit line.
    # (matplotlib imports are assumed from the previous post:
    #  import matplotlib.pyplot as plt / import matplotlib.animation as animation)
    fig, ax = plt.subplots()
    line, = ax.plot(house_size, house_price)

    plt.rcParams["figure.figsize"] = (10, 8)
    plt.title("Gradient Descent Fitting Regression Line")
    plt.ylabel("Price")
    plt.xlabel("Size (sq.ft)")
    plt.plot(train_house_size, train_price, 'go', label='Training data')
    plt.plot(test_house_size, test_house_price, 'mo', label='Testing data')

    def animate(i):
        # De-normalize the x values and the i-th recorded fit line back to real units
        line.set_xdata(train_house_size_norm * train_house_size_std + train_house_size_mean)
        line.set_ydata((fit_size_factor[i] * train_house_size_norm + fit_price_offsets[i]) * train_price_std + train_price_mean)
        return line,

    # Init only required for blitting to give a clean slate.
    def initAnim():
        line.set_ydata(np.zeros(shape=house_price.shape[0]))  # set y's to 0
        return line,

    ani = animation.FuncAnimation(fig, animate, frames=np.arange(0, fit_plot_idx), init_func=initAnim,
                                  interval=1000, blit=True)

    plt.show()
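
If you would rather keep the animation than just display it (an optional extra, not in the original post), FuncAnimation can write it to disk, assuming a writer such as imagemagick or ffmpeg is installed:

    # Optional: save the animation as a GIF (requires the imagemagick writer)
    ani.save("gradient_descent_fit.gif", writer="imagemagick")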

Conclusion

In the next post, we will finally be ready to build and train our first neural network model in TensorFlow – on image data, for image recognition. It will give us our first real hands-on experience with neural networks using TensorFlow!

Please don’t miss out on the live webinars where I talk about everything I write. Register for our upcoming webinars to find out which topics we will be discussing. Happy Coding!

I am embedding the original presentation below –

Author: vivekkalyanarangan@gmail.com

Data Scientist, Blogger, Guitar Player and geeks out on new technology through and through