The Backpropagation Algorithm-PART(1): MLP and Sigmoid
What is this post about? The training process of deep Artificial Neural Networks (ANNs) is based on the backpropagation algorithm. Starting with this post, and…
What Will You Learn? In our previous post, we talked about the meaning of gradient descent and how it can help us update the…
The Gradient Descent Rule https://www.youtube.com/watch?v=gYqG4OT2Kj4 When training a model, we strive to minimize a certain error function. This error function gives us an indication…
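As a quick illustrative sketch of the gradient descent idea in the teaser above: repeatedly step against the gradient of the error function to approach its minimum. The specific error function, learning rate, and names (`grad_E`, `eta`) below are assumptions for demonstration, not taken from the post itself.

```python
# Minimal gradient descent sketch: minimize E(w) = (w - 3)^2.
# (Illustrative assumption; the post's actual error function is not shown here.)
def grad_E(w):
    return 2.0 * (w - 3.0)  # derivative of (w - 3)^2

w = 0.0      # initial weight
eta = 0.1    # learning rate
for _ in range(100):
    w -= eta * grad_E(w)  # step against the gradient

print(round(w, 4))  # converges toward the minimizer w = 3
```

Each iteration shrinks the distance to the minimizer by a constant factor, which is why a small fixed learning rate suffices for this convex example.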
What We Have Learned So Far … So far, we have learned that the Delta rule is guaranteed to converge to a model that fits our…
The Beauty that is the Delta Rule In general, there are two main ways to train an Artificial Neural Network (ANN). In our previous post…
The Perceptron Training Rule It is important to understand how huge neural networks are trained. However, we can simplify this by first understanding…
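The perceptron training rule named in this teaser is the standard update w_i ← w_i + η(t − o)x_i, where t is the target, o is the thresholded output, and η is the learning rate. The sketch below applies it to a small AND dataset; the dataset, bias encoding, and variable names are illustrative assumptions, not drawn from the post.

```python
# Perceptron training rule sketch: w_i <- w_i + eta * (t - o) * x_i.
# The AND dataset (with a constant bias input of 1) is an illustrative assumption.
def predict(w, x):
    # thresholded output: +1 if w.x > 0, else -1
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else -1

data = [((1, 0, 0), -1), ((1, 0, 1), -1),
        ((1, 1, 0), -1), ((1, 1, 1), 1)]  # logical AND, targets in {-1, +1}
w = [0.0, 0.0, 0.0]
eta = 0.1
for _ in range(20):            # sweep the training set a fixed number of times
    for x, t in data:
        o = predict(w, x)
        # update each weight only when the prediction is wrong (t - o != 0)
        w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]

print(all(predict(w, x) == t for x, t in data))  # True: AND is linearly separable
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the loop stops making updates after finitely many mistakes.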
A Quick Recap Hello everyone and welcome! In our previous post, we talked about the first algorithm that uses the “more-general-than-or-equal-to” operation to smooth out…
A Quick Recap on our Last Post In our last post, we talked about the more-general-than-or-equal-to operation, which we denoted with ≥g and we said that in…
General-to-Specific Ordering of Hypotheses In the last post, we said that all concept learning problems share one thing in common regarding their structure, and it…
Introduction If we really wanted to simplify the whole story behind “Learning” in machine learning, we could say that a machine learning algorithm strives to…