What is this post about?
In this super gentle 101 talk, given at the Insight Centre for Data Analytics at University College Dublin, I go through some of the most important aspects of Artificial Neural Networks (ANNs).
Here is what you will learn in this talk, along with the corresponding time-stamps (a short illustrative code sketch follows the list):
00:33 Table of contents
03:04 The biological motivation
03:32 The big picture of what we expect from an ANN
04:30 An actual nerve cell and an artificial perceptron: Some similarities
10:37 A giant biological neural net and a giant artificial neural network: A face off!
14:28 ANNs as Truth approximators: What is the Truth?
16:21 ANNs as Truth approximators: Examples of such Truth approximations
17:49 How well/poorly is my ANN performing? The idea of Error functions
20:54 The training cycle of an ANN: Epochs, Backpropagation and tuning the parameters of an ANN
25:27 What is the backpropagation algorithm? What is a gradient? How does an ANN use them to tune its parameters?
28:56 Concluding thoughts
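
Since the timestamps above touch on error functions, gradients, epochs, and backpropagation, here is a minimal sketch of those ideas in Python/NumPy. It is not taken from the talk: the network size (2-3-1), the learning rate, and the toy XOR dataset are all illustrative assumptions.

```python
# A minimal sketch (not from the talk): a tiny one-hidden-layer network trained
# with backpropagation, illustrating the vocabulary in the timestamps above:
# error function, gradients, epochs, and tuning parameters.
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and targets (an XOR-like problem), purely illustrative.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised parameters of an assumed 2-3-1 network.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5  # assumed value
for epoch in range(5000):            # one epoch = one pass over the data
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Error function: mean squared error between prediction and target.
    error = np.mean((y_hat - y) ** 2)

    # Backpropagation: gradients of the error w.r.t. each parameter.
    d_yhat = 2 * (y_hat - y) / len(X)
    d_z2 = d_yhat * y_hat * (1 - y_hat)
    d_W2, d_b2 = h.T @ d_z2, d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * h * (1 - h)
    d_W1, d_b1 = X.T @ d_z1, d_z1.sum(axis=0)

    # Tune the parameters by stepping against the gradient.
    W1 -= learning_rate * d_W1; b1 -= learning_rate * d_b1
    W2 -= learning_rate * d_W2; b2 -= learning_rate * d_b2

print("final error:", error)
print("predictions:", y_hat.round(2).ravel())
```

Each pass over the four examples is one epoch; the gradient of the error function tells the network which direction to nudge every weight, which is exactly the training cycle the talk walks through.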

Author: Mehran
Dr. Mehran H. Bazargani is a researcher and educator specialising in machine learning and computational neuroscience. He earned his Ph.D. from University College Dublin, where his research centered on semi-supervised anomaly detection through the application of One-Class Radial Basis Function (RBF) Networks. His academic foundation was laid with a Bachelor of Science degree in Information Technology, followed by a Master of Science in Computer Engineering from Eastern Mediterranean University, where he focused on molecular communication facilitated by relay nodes in nano wireless sensor networks.

Dr. Bazargani’s research interests are situated at the intersection of artificial intelligence and neuroscience, with an emphasis on developing brain-inspired artificial neural networks grounded in the Free Energy Principle. His work aims to model human cognition, including perception, decision-making, and planning, by integrating advanced concepts such as predictive coding and active inference. As a NeuroInsight Marie Skłodowska-Curie Fellow, Dr. Bazargani is currently investigating the mechanisms underlying hallucinations, conceptualising them as instances of false inference about the environment. His research seeks to address this phenomenon in neuropsychiatric disorders by employing brain-inspired AI models, notably predictive coding (PC) networks, to simulate hallucinatory experiences in human perception.