Welcome to Our Blog
![](https://www.mldawn.com/wp-content/uploads/2022/01/Prof.-Friston-Interview-300x169.png)
An Interview with Prof. Karl Friston
Karl Friston is a theoretical neuroscientist and an authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) …
![](https://www.mldawn.com/wp-content/uploads/2021/08/Insight-101-Talk-300x169.png)
A Gentle 101 Talk on Artificial Neural Networks
What is this post about? In this super gentle 101 talk given at the Insight Centre for Data Analytics at …
![sigmoid and its derivative](https://www.mldawn.com/wp-content/uploads/2021/06/sigmoid_derivative_together-300x145.png)
The Backpropagation Algorithm - Part 1: MLP and Sigmoid
What is this post about? The training process of deep Artificial Neural Networks (ANNs) is based on the backpropagation algorithm.
![](https://www.mldawn.com/wp-content/uploads/2021/06/Desceptively-simple-Relu-300x169.png)
Dissecting ReLU: A Deceptively Simple Activation Function
What is this post about? This is what you will be able to generate and understand by the end of …
![](https://www.mldawn.com/wp-content/uploads/2021/05/Mikhail-Belkin-300x169.jpg)
An interview with Prof. Mikhail Belkin
“We never truly understood the bias-variance trade-off!” In this interview with Prof. Mikhail Belkin, we will discuss his amazing …
![](https://www.mldawn.com/wp-content/uploads/2021/04/double-descent-300x114.png)
Reconciling modern machine learning practice and the bias-variance trade-off
What is this post about? An interview with the lead author of the paper, Prof. Mikhail Belkin. Together we will …
![](https://www.mldawn.com/wp-content/uploads/2021/03/sss-300x169.jpeg)
An interview with Prof. Tom Mitchell
“There are a lot more papers written than there are widely read!” (Tom Mitchell) Prof. Tom Mitchell is one of …
![](https://www.mldawn.com/wp-content/uploads/2020/08/gradients_per_loss_surface-300x145.png)
Stochastic Approximation to Gradient Descent
What will you learn? The video below delivers the main points of this blog post on Stochastic Gradient Descent (SGD):
![](https://www.mldawn.com/wp-content/uploads/2020/05/backprop-softmax-cross-9-300x169.png)
Back-propagation with Cross-Entropy and Softmax
What will you learn? This post is also available as a video, should you be interested 😉 https://www.youtube.com/watch?v=znqbtL0fRA0&feature=youtu.be