When studying new improvements to neural networks, many people run into the same problem:
How do these unfamiliar mathematical concepts relate to neural networks?
This blog addresses that problem by explaining the required prerequisites from scratch¹ and relating them to neural-network applications. It focuses on building intuition through visuals and animation, much like resources such as 3Blue1Brown.
Not sure where to start? Start with the video below:
Why do Neural Networks use Linear Algebra? – The Visual Intuition of Cat Mathematics
NOTE: Ch 1 (along with many other chapters) is outdated and will be updated later. Until then, it is recommended to watch this video instead, which covers the same concepts as Ch 1 more clearly.
CHAPTER 1: How Neural Networks Reveal Hidden Insights in Data (using Matrix Multiplication)
1.1: How does Matrix Multiplication Guess it’s a Cat from its Face and Body?
1.2: Beware of False Friends in the Matrix?
1.3: Why is the Dot Product used in Matrix Multiplication?
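As a rough sketch of the idea behind this chapter (a toy example of my own, not the blog's code): matrix multiplication scores an input against a set of learned templates, one dot product per row, which is how a network can "guess it's a cat" from feature measurements. The templates and feature values below are made up for illustration.

```python
import numpy as np

# Hypothetical "cat detector": each row of W is a template of weights
# for one concept. Matrix multiplication computes one dot product per
# row, scoring how strongly the input matches each template.
W = np.array([
    [1.0, 0.8, 0.0],   # made-up "cat face" template
    [0.0, 0.2, 1.0],   # made-up "cat body" template
])
x = np.array([0.9, 0.7, 0.1])  # made-up input features

scores = W @ x  # same as [np.dot(W[0], x), np.dot(W[1], x)]
print(scores)   # one similarity score per template
```

A higher score means the input points in a more similar direction to that template, which is exactly the geometric reading of the dot product the chapter builds on.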
CHAPTER 2: How Neural Networks Make Faces Look Younger (using Vector Projection)
2.0: Face Filters, Anime Filters, and Fake People
2.1: Changing Features using Vector Addition
2.2: Conditioning on Features using Orthogonal Projection
2.3: Scoring Semantics using Hyperplanes
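The three operations this chapter covers can be sketched in a few lines (an assumed toy setup, not the blog's code): given a latent code and a unit-length "age" direction, vector addition moves along the direction, orthogonal projection removes the component along it, and the dot product with the hyperplane's normal scores the attribute. All vectors below are made up for illustration.

```python
import numpy as np

z = np.array([0.5, -1.2, 0.3])   # hypothetical latent code
d = np.array([0.0, 1.0, 0.0])    # hypothetical unit "age" direction

# 2.1-style edit: move along the direction via vector addition.
z_young = z + 2.0 * d

# 2.2-style conditioning: subtract the component of z along d
# (an orthogonal projection; d is unit-length here).
z_neutral = z - np.dot(z, d) * d

# 2.3-style score: signed distance to the hyperplane with normal d.
score = np.dot(z, d)
```

In a real face-editing model the latent codes have hundreds of dimensions and the direction is learned, but the arithmetic is the same.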
CHAPTER 3: How Neural Networks Find the Most Important Features in Data (using Eigenvectors)
3.1: Paper Explanation: GANspace
3.3: Paper Explanation: Eigenfaces
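A minimal sketch of the idea shared by both papers (my own synthetic example, not code from GANspace or Eigenfaces): the eigenvectors of a dataset's covariance matrix point along the directions of greatest variance, i.e. the most important features. The data below is generated to be wide along one axis and narrow along the other.

```python
import numpy as np

# Synthetic 2-D data: large spread along x, small spread along y.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])
X -= X.mean(axis=0)                      # center the data

cov = X.T @ X / len(X)                   # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top = eigvecs[:, -1]                     # direction of greatest variance
# `top` lies almost exactly along the x-axis, the wide direction.
```

GANspace applies this to GAN latent activations and Eigenfaces to pixel images, but both recover their "important directions" from exactly this kind of eigendecomposition.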
Future topics:
Appendices:
Assuming only that the reader has a basic understanding of foundational topics, or has watched certain videos. Links to these videos will be provided. ↩