So far, I'm really liking this article on gradient descent. Bravo for giving both a clear overall picture _and_ getting into the math! In particular, I love your analogy of being caught in a dark fog, able only to physically feel the direction of steepest descent rather than see the bottom of the valley.

However, the first code snippet (the one for Batch Gradient Descent) doesn't run as-is, because `X_b` isn't defined. Did you intend to include something like

```python
X_b = np.c_[np.ones(m), X]
```

right before the for-loop? (As you know, this prepends a column of 1s to `X`, promoting the 1-D array to a 2-D design matrix.)

With this line included, the code runs and produces sensible results.
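For other readers hitting the same error, here is a minimal, self-contained sketch of how the fix slots into a Batch Gradient Descent loop. The data generation, learning rate `eta`, and iteration count are my own stand-ins, not the article's exact values:

```python
import numpy as np

# Stand-in data (assumed, not the article's): y = 4 + 3x + noise
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1)
y = 4 + 3 * X + np.random.randn(m, 1)

# The missing line: prepend a column of 1s for the bias/intercept term
X_b = np.c_[np.ones(m), X]

eta = 0.1            # learning rate (assumed)
n_iterations = 1000  # assumed
theta = np.random.randn(2, 1)  # random initialization

for _ in range(n_iterations):
    # Gradient of the MSE cost over the full batch
    gradients = 2 / m * X_b.T @ (X_b @ theta - y)
    theta = theta - eta * gradients
```

With this setup, `theta` converges to values near the true intercept and slope (4 and 3), confirming the fix behaves as expected.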

**A series explaining lowess and B-spline smoothing**

Parts 2 through 6 have now been consolidated into a single six-part blog post published in *Towards Data Science*.