Latest Article: Understanding Gradient Descent
Welcome to my first static page! Since I specialize in ML, here is a quick look at gradient descent, the standard way to minimize a cost function $J(\theta_0, \theta_1)$: each parameter $\theta_j$ is nudged against its partial derivative, scaled by a learning rate $\alpha$:
$$ \theta_{j} := \theta_{j} - \alpha \frac{\partial}{\partial \theta_{j}} J(\theta_0, \theta_1) $$
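To make the update rule concrete, here is a minimal sketch of batch gradient descent, assuming the common case where $J$ is the mean-squared-error cost of a univariate linear model $h(x) = \theta_0 + \theta_1 x$ (the two-parameter form the equation suggests):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iterations=1000):
    """Minimize the MSE cost J(theta0, theta1) = (1/2m) * sum((h - y)^2)
    for the linear model h(x) = theta0 + theta1 * x."""
    theta0, theta1 = 0.0, 0.0
    m = len(x)
    for _ in range(iterations):
        h = theta0 + theta1 * x               # current predictions
        # Partial derivatives of J with respect to each parameter
        grad0 = (1.0 / m) * np.sum(h - y)
        grad1 = (1.0 / m) * np.sum((h - y) * x)
        # Simultaneous update: theta_j := theta_j - alpha * dJ/dtheta_j
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Recover the line y = 2x + 1 from noise-free samples
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1
t0, t1 = gradient_descent(x, y)   # converges toward (1.0, 2.0)
```

Note that both parameters are updated simultaneously from the same predictions `h`; updating $\theta_0$ first and then recomputing the gradient for $\theta_1$ would be a subtly different (and incorrect) algorithm.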
I am currently building this site to share my insights on neural networks and advanced mathematics.