Recurrent Neural Networks


Recurrent Neural Networks - Session 02



32 Comments

Hi,

On slide no. 107, what is the purpose of using a fully connected (linear) layer without an activation function?


Hi,

Good question!

You can go through the below link which discusses this in detail:

https://stats.stackexchange.com/questions/361066/what-is-the-point-of-having-a-dense-layer-in-a-neural-network-with-no-activation
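To add a bit of context: I can't say exactly what is on slide 107, but here is a rough sketch (TF 1.x style, with hypothetical dimensions) of the common pattern where a dense layer with no activation serves as a linear output projection for regression:

import tensorflow as tf

# Hypothetical dimensions, just for illustration
n_steps, n_inputs, n_neurons, n_outputs = 20, 1, 100, 1

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

# Flatten the time dimension and project n_neurons -> n_outputs with NO activation:
# the layer stays purely linear, which is what we want for a regression output
# (no squashing of the predicted values).
stacked_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_preds = tf.layers.dense(stacked_outputs, n_outputs, activation=None)
outputs = tf.reshape(stacked_preds, [-1, n_steps, n_outputs])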

Thanks.


Sir, it would be good if LSTM were explained using a real-world problem so that the concept becomes clear. The LSTM section is not elaborate enough.


Hi,

Thank you for your feedback. We will definitely consider this when we update our courseware.

Thanks.


I am facing frequent interruptions in the Jupyter notebook.


Hi,

Is this related to the hourly rain gauge project?

Thanks.


In the example Training an RNN to Predict a Time Series (slides 88-98), I am not able to understand which training data we are feeding to the RNN; there is no step to import train and test data.

 

When I try to execute the step below, I get an error. Please guide.

 

n_iterations = 10000
batch_size = 50

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)  # fetch the next training batch
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)
    saver.save(sess, 'model_ckps/my_time_series_model')

 

Error:

 


Here is the error:

 

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-8-1a38cc2b0f27> in <module>
      9     for iteration in range(n_iterations):
     10 
---> 11         X_batch, y_batch = next_batch(batch_size,n_steps) # fetch the next training batch
     12 
     13         sess.run(training_op, feed_dict={X: X_batch, y:y_batch})

NameError: name 'next_batch' is not defined

 


Hi,

next_batch is a method available on datasets like MNIST that we import from libraries. Please refer to slide 85 to get a picture of how it is used. Hope this helps.
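As a rough illustration (this uses the TF 1.x tutorial helpers; the exact import used on slide 85 may differ):

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/")   # downloads MNIST on first run
X_batch, y_batch = mnist.train.next_batch(50)     # 50 flattened images and their labels
print(X_batch.shape, y_batch.shape)               # (50, 784) (50,)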

Thanks.


Thanks for the reply.

Yes, I have already done the MNIST example you mentioned. I am talking about the next topic, 'Training to Predict a Time Series'. Please see slide 97: in this case no dataset is imported, yet we are still running the RNN. I could not get that part. When I tried the same code shown in the video, I got the above error. Please guide.


Sorry, I found my mistake; it is resolved now. I had missed a step that is in the video but not in the slides. I watched the video earlier and then simply followed the slides during practice to save time.

 

The code below, which creates the time series, is not in the slides:

 

 

import numpy as np

t_min, t_max = 0, 30
resolution = 0.1

def time_series(t):
    # Underlying signal: a slow sine scaled by t, a faster sine, plus noise
    return t * np.sin(t) / 3 + 2 * np.sin(t * 5) + np.random.random(1)[0] / 2

def next_batch(batch_size, n_steps):
    # Pick random starting points, then sample n_steps + 1 evenly spaced values per sequence
    t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
    Ts = t0 + np.arange(0., n_steps + 1) * resolution
    ys = time_series(Ts)
    # Inputs are the first n_steps values; targets are the same series shifted one step ahead
    return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)

 


Hi,

I haven't seen your code; however, you can go through the link below for an explanation and a probable solution:

https://stackoverflow.com/questions/42616625/valueerror-tensor-must-be-from-the-same-graph-as-tensor-with-bidirectinal-rnn-i

Thanks.


Thanks Rajtilak. The issue was that one of the cells hadn't been executed.


Towards the end, the slides related to Creative RNN, Deep RNN, LSTM, and GRU are missing.


Hi,

Those topics take more of a practical approach, so I would suggest you try the notebooks instead.

Thanks.


Hello Sir

That is right, but the slides shown in the video have content that is empty in the PDF. Please update.

Regards..


Hi,

Thank you for your feedback. We will definitely consider this when we update these slides.

Thanks.


Please explain the code below:

  • for iteration in range(mnist.train.num_examples // batch_size):

We have not defined mnist.train.num_examples anywhere, but suddenly we use it in the session. Is it a predefined function, or what is it? Please explain.

and 

  • sess.run(training_op, feed_dict={X: X_batch, y: y_batch})

In the above line, please explain how feed_dict={X: X_batch, y: y_batch} works.

Thank you.


Hi,

The TensorFlow load() function returns a tuple containing dataset information; num_examples is one piece of that information. You can find more about it here:

https://www.tensorflow.org/datasets/api_docs/python/tfds/load

feed_dict is a dictionary that maps the placeholders X and y to the batch arrays X_batch and y_batch for that particular run.
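Here is a small self-contained sketch (not the course code) of how feed_dict works: the keys are the placeholder tensors defined in the graph, and the values are the NumPy arrays to substitute for them in that particular sess.run call.

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.placeholder(tf.float32, shape=[None, 1])
loss = tf.reduce_mean(tf.square(tf.reduce_sum(X, axis=1, keepdims=True) - y))

X_batch = np.random.rand(4, 3).astype(np.float32)
y_batch = np.random.rand(4, 1).astype(np.float32)

with tf.Session() as sess:
    # Each placeholder is fed the corresponding batch array for this run only
    print(sess.run(loss, feed_dict={X: X_batch, y: y_batch}))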

Thanks.


Sorry, but I did not find anything about my query in the link you provided.


Hi,

If with_info is True, tfds.load returns a tuple (ds, ds_info), where ds_info is a tfds.core.DatasetInfo object containing dataset information (version, features, splits, num_examples, ...). Note that the ds_info object documents the entire dataset, regardless of the split requested; split-specific information is available in ds_info.splits.
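For example (a minimal sketch, assuming the tensorflow_datasets package is installed and imported as tfds):

import tensorflow_datasets as tfds

# with_info=True makes tfds.load return the DatasetInfo object as well
ds, ds_info = tfds.load("mnist", split="train", with_info=True)
print(ds_info.splits["train"].num_examples)   # 60000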

You can add another cell just below the code and print mnist.train.num_examples. Hope this helps.

Thanks.


Hi Swapna and Dr. Khan,

Taking into consideration your feedback, we have updated our slides. Please check and let me know if you can find all the details now.

Thanks.


Hi,
Could you please explain the parameters of next_batch(batch_size, n_steps) and its output?


Hi,

next_batch generates one training batch: batch_size is the number of sequences in the batch, and n_steps is the number of time steps in each sequence. It returns two arrays of shape (batch_size, n_steps, 1): the input sequences and the target sequences, which are the same series shifted one step ahead.
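For example, using the next_batch definition posted earlier in this thread:

# 50 sequences, each 20 time steps long, one value per time step
X_batch, y_batch = next_batch(batch_size=50, n_steps=20)
print(X_batch.shape)   # (50, 20, 1) -> input sequences
print(y_batch.shape)   # (50, 20, 1) -> targets: same sequences shifted one step ahead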

Thanks.

-- Rajtilak Bhattacharjee


Hi,

I am not able to understand tf.nn.in_top_k() properly.

Please help, or give me a slide or link to study. I have googled it but did not find any article that clears my doubt.


Hi,

It checks whether each target is among the top k predictions. You can find more about it here:

https://www.tensorflow.org/... https://docs.w3cub.com/tens...
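As a quick illustration (a toy TF 1.x sketch, not from the course notebooks):

import tensorflow as tf

logits = tf.constant([[0.1, 2.0, 0.3],    # predicted class scores for 2 examples
                      [1.5, 0.2, 0.1]])
labels = tf.constant([1, 2])              # true class indices

correct = tf.nn.in_top_k(logits, labels, k=1)            # is the true class in the top 1?
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    print(sess.run(correct))    # [ True False]
    print(sess.run(accuracy))   # 0.5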

Thanks.

-- Rajtilak Bhattacharjee


Why are there so few slides? Please update to the latest slides; many of the neural network slides are incomplete.


Hi,

Taking into consideration your feedback, we have updated our slides. Please check and let me know if you can find all the details now.

Thanks.
