Day 6 and 7: Binary Classification

Hello Everyone!

Hope you are tuning your learning rates well to meet the demands of the Deep Learning Bootcamp. :smile: We have released the Day 6 and Day 7 learning units on the Bootcamp platform.
These units cover classification, activation functions, error functions, and related topics. Most importantly, you will build your first Deep Learning model, and two intuitive notebooks will help ease the process.

  • What was released? - Learning units for Day 6 and 7, plus a quiz on Regression
  • Where can it be accessed? - Through your dashboard: https://dphi.tech/dashboard/

Feel free to ask any questions here.

Happy learning and coding!

Could you help me better understand why I get these kinds of results, please?

In the notebook, the loss and accuracy graphs are captioned: “Look how the accuracy is slowly increasing and the loss slowly decreasing. Interesting, right?” which, in my opinion, indicates that the model is working well.

But when I run the training again without making any changes to the code, I get the following results (occasionally a re-run does produce a result similar to the one in the notebook that was shared with us).

Please help me with the following questions:

Why do the results vary even when I set a seed with “tensorflow.random.set_seed(seed_value)”?
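A possible reason (an assumption here, since the full notebook is not shown): a seed makes runs reproducible only if it is set again at the start of every run, and in TensorFlow you would typically also seed Python's `random` and NumPy alongside `tensorflow.random.set_seed`; GPU operations can remain nondeterministic even then. The principle can be sketched with the standard library alone:

```python
import random

def noisy_training_run(seed):
    # Re-seed at the START of every run; seeding once at import
    # time does not make later runs identical to each other.
    random.seed(seed)
    # Stand-in for stochastic training: accumulate "mini-batch" noise.
    return sum(random.random() for _ in range(100))

run_a = noisy_training_run(42)
run_b = noisy_training_run(42)
assert run_a == run_b  # identical only because both runs re-seeded
```

If the seed were set once at the top of the notebook and the training cell re-executed without re-seeding, the random stream would simply continue, so each re-run would differ.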

When I get a graph like the one I obtained, how should I proceed? With epochs = 200, batch size = 10, and learning rate = 0.001 I obtained a graph that, in my opinion, does not indicate a good model, yet in the notebook that was shared, those same parameters produce a well-behaved graph.

I decreased the learning rate to 0.0001 and increased the number of epochs, and got this result:

Why do I get this result?

When I kept re-running the training with the 0.0001 learning rate, the results were similar to the following in most cases.

Hi, @richard_ramos. You are pointing out a very interesting question. When you execute the line

history = model.fit(X_train, y_train, validation_split=0.2, epochs=200, batch_size=10, verbose=1)

twice, you are re-fitting the model on the same training set. The model keeps learning, but since it is learning from the same observations over and over, it will tend to overfit, which is what your graph shows. If you execute the instruction once again, the blue line will drift even further away from the orange line.

Every time you want to change something in your model’s architecture, you should rebuild and recompile it, re-running the model-definition instructions as well, so that training starts from fresh weights.
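The key point, that a second fit resumes from the already-trained weights rather than starting over, can be illustrated with a toy analogy (not the Keras API; `make_model` and `fit` below are hypothetical stand-ins doing gradient descent on f(w) = (w - 3)²):

```python
# Toy stand-in for a Keras model: each call to fit() continues from
# the model's CURRENT weight, just as a second model.fit() call in
# Keras resumes from the already-trained weights.

def make_model(w0=0.0):
    # "Re-defining the model" resets the weight to its initial value.
    return {"w": w0}

def fit(model, epochs=50, lr=0.1):
    for _ in range(epochs):
        grad = 2 * (model["w"] - 3.0)  # d/dw of (w - 3)^2
        model["w"] -= lr * grad
    return model

m = make_model()
fit(m)                       # first training run
w_after_one = m["w"]
fit(m)                       # second run resumes from w_after_one
w_after_two = m["w"]

fresh = fit(make_model())    # rebuilding the model restarts training

# Two consecutive fits get closer to the optimum than one fit...
assert abs(w_after_two - 3.0) <= abs(w_after_one - 3.0)
# ...while a rebuilt model reproduces a single fresh run exactly.
assert abs(fresh["w"] - w_after_one) < 1e-9
```

This is why re-running only the `model.fit(...)` cell shows training continuing (and eventually overfitting), while re-running the model-definition cells first gives a fresh start.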
