Backpropagation calculus | Deep learning, chapter 4


The hard assumption here is that you’ve watched part 3, giving an intuitive walkthrough of the backpropagation algorithm. Here, we get a bit more formal and dive into the relevant calculus. It’s normal for this to be a little confusing, so the mantra to regularly pause and ponder certainly applies as much here as anywhere else. Our main goal is to show how people in machine learning commonly think about the chain rule from calculus in the context of networks, which has a different feel from how most introductory calculus courses approach the subject. For those of you uncomfortable with the relevant calculus, I do have a whole series on the topic.

Let’s start off with an extremely simple network, one where each layer has a single neuron in it. So this particular network is determined by 3 weights and 3 biases, and our goal is to understand how sensitive the cost function is to these variables. That way we know which adjustments to these terms are going to cause the most efficient decrease to the cost function. And we’re going to focus just on the connection between the last two neurons. Let’s label the activation of that last neuron a with a superscript L, indicating which layer it’s in, so the activation of this previous neuron is a^(L-1). These are not exponents; they’re just a way of indexing what we’re talking about, since I want to save subscripts for different indices later on.

Let’s say that the value we want this last activation to be for a given training example is y. For example, y might be 0 or 1. So the cost of this simple network for a single training example is (a^(L) – y)^2. We’ll denote the cost of this one training example as C_0. As a reminder, this last activation is determined by a weight, which I’m going to call w^(L), times the previous neuron’s activation, plus some bias, which I’ll call b^(L); then you pump that through some special nonlinear function like a sigmoid or a ReLU. It’s actually going to make things easier for us if we give a special name to this weighted sum, like z, with the same superscript as the relevant activations.

So there are a lot of terms. And a way you might conceptualize this is that the weight, the previous activation, and the bias all together are used to compute z, which in turn lets us compute a, which finally, along with the constant y, lets us compute the cost. And of course, a^(L-1) is influenced by its own weight and bias, and so on, but we’re not gonna focus on that right now. All of these are just numbers, right? And it can be nice to think of each one as having its own little number line.

Our first goal is to understand how sensitive the cost function is to small changes in our weight w^(L). Or, phrased differently, what’s the derivative of C with respect to w^(L)? When you see this “∂w” term, think of it as meaning “some tiny nudge to w”, like a change by 0.01. And think of this “∂C” term as meaning “whatever the resulting nudge to the cost is”. What we want is their ratio. Conceptually, this tiny nudge to w^(L) causes some nudge to z^(L), which in turn causes some change to a^(L), which directly influences the cost. So we break this up by first looking at the ratio of a tiny change in z^(L) to the tiny change in w^(L); that is, the derivative of z^(L) with respect to w^(L). Likewise, you then consider the ratio of a change in a^(L) to the tiny change in z^(L) that caused it, as well as the ratio between the final nudge to C and this intermediate nudge to a^(L).
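As a quick sanity check on this chain of nudges, here is a minimal Python sketch; it is not from the video, the values of w^(L), a^(L-1), b^(L) and y are made up, and sigmoid is assumed as the nonlinearity. It compares the chain-rule product against the effect of an actual tiny nudge to the weight:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Made-up values for illustration.
w, a_prev, b, y = 0.7, 0.6, -0.3, 1.0

def cost(weight):
    z = weight * a_prev + b      # z^(L) = w^(L) a^(L-1) + b^(L)
    a = sigmoid(z)               # a^(L) = sigmoid(z^(L))
    return (a - y) ** 2          # C_0 = (a^(L) - y)^2

# The three ratios from the chain rule.
z = w * a_prev + b
a = sigmoid(z)
dC_da = 2 * (a - y)                    # dC/da^(L)
da_dz = sigmoid(z) * (1 - sigmoid(z))  # da^(L)/dz^(L), derivative of sigmoid
dz_dw = a_prev                         # dz^(L)/dw^(L)
chain_rule = dz_dw * da_dz * dC_da

# Compare against an actual tiny nudge to w.
h = 1e-6
nudge_ratio = (cost(w + h) - cost(w)) / h

print(chain_rule, nudge_ratio)  # the two values should nearly agree
```

The two printed numbers agree to several decimal places, which is exactly what the chain rule promises.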
This right here is the chain rule, where multiplying together these three ratios gives us the sensitivity of C to small changes in w^(L). So on screen right now there are kind of a lot of symbols, so take a moment to make sure it’s clear what they all are, because now we’re gonna compute the relevant derivatives.

The derivative of C with respect to a^(L) works out to be 2(a^(L) – y). Notice, this means that its size is proportional to the difference between the network’s output and the thing we want it to be. So if that output was very different, even slight changes stand to have a big impact on the cost function. The derivative of a^(L) with respect to z^(L) is just the derivative of our sigmoid function, or whatever nonlinearity you choose to use. And the derivative of z^(L) with respect to w^(L), in this case, comes out to just be a^(L-1).

Now I don’t know about you, but I think it’s easy to get stuck head-down in these formulas without taking a moment to sit back and remind yourself what they all actually mean. In the case of this last derivative, the amount that a small nudge to this weight influences the last layer depends on how strong the previous neuron is. Remember, this is where that “neurons that fire together wire together” idea comes in.

And all of this is the derivative with respect to w^(L) only of the cost for a specific training example. Since the full cost function involves averaging together all those costs across many training examples, its derivative requires averaging this expression that we found over all training examples. And of course, that is just one component of the gradient vector, which itself is built up from the partial derivatives of the cost function with respect to all those weights and biases. But even though it was just one of those partial derivatives we need, it’s more than 50% of the work.

The sensitivity to the bias, for example, is almost identical. We just need to change out this ∂z/∂w term for a ∂z/∂b, and if you look at the relevant formula, that derivative comes out to be 1. Also, and this is where the idea of propagating backwards comes in, you can see how sensitive this cost function is to the activation of the previous layer; namely, this initial derivative in the chain rule expansion, the sensitivity of z to the previous activation, comes out to be the weight w^(L). And again, even though we won’t be able to directly influence that activation, it’s helpful to keep track of, because now we can just keep iterating this chain rule idea backwards to see how sensitive the cost function is to previous weights and previous biases; the sketch just below walks through exactly that backwards iteration.
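To make that backwards iteration concrete, here is a minimal Python sketch for the one-neuron-per-layer chain; it is an illustration under assumptions, not the video’s code: the three weights and biases are made-up values, and sigmoid is the assumed nonlinearity:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)

# Made-up parameters: 3 weights and 3 biases, one neuron per layer.
weights = [0.5, -0.8, 1.2]
biases  = [0.1,  0.3, -0.4]
a0, y = 0.35, 1.0  # input activation and desired output

# Forward pass: record every z^(l) and a^(l).
activations, zs = [a0], []
for w, b in zip(weights, biases):
    z = w * activations[-1] + b
    zs.append(z)
    activations.append(sigmoid(z))

# Backward pass: start from dC/da^(L) = 2(a^(L) - y) and iterate backwards.
dC_da = 2 * (activations[-1] - y)
grads_w, grads_b = [0.0] * 3, [0.0] * 3
for l in reversed(range(3)):
    dC_dz = dC_da * dsigmoid(zs[l])      # da^(l)/dz^(l) link in the chain
    grads_w[l] = dC_dz * activations[l]  # dz^(l)/dw^(l) = a^(l-1)
    grads_b[l] = dC_dz * 1.0             # dz^(l)/db^(l) = 1
    dC_da = dC_dz * weights[l]           # dz^(l)/da^(l-1) = w^(l)

print(grads_w, grads_b)
```

Notice that a single running value, dC_da, is all that needs to be carried backwards from one layer to the next.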
And you might think this is an overly simple example, since all the layers have just 1 neuron, and things are just gonna get exponentially more complicated in a real network. But honestly, not that much changes when we give the layers multiple neurons. Really, it’s just a few more indices to keep track of. Rather than the activation of a given layer simply being a^(L), it’s also going to have a subscript indicating which neuron of that layer it is. Let’s go ahead and use the letter k to index layer (L-1), and j to index layer (L).

For the cost, again we look at what the desired output is, but this time we add up the squares of the differences between these last-layer activations and the desired output. That is, you take a sum over (a_j^(L) – y_j)^2. Since there are a lot more weights, each one has to have a couple more indices to keep track of where it is. So let’s call the weight of the edge connecting this k-th neuron to the j-th neuron w_{jk}^(L). Those indices might feel a little backwards at first, but it lines up with how you’d index the weight matrix that I talked about in the Part 1 video.

Just as before, it’s still nice to give a name to the relevant weighted sum, like z, so that the activation of the last layer is just your special function, like the sigmoid, applied to z. You can kinda see what I mean, right? These are all essentially the same equations we had before in the one-neuron-per-layer case; it just looks a little more complicated. And indeed, the chain-rule derivative expression describing how sensitive the cost is to a specific weight looks essentially the same. I’ll leave it to you to pause and think about each of these terms if you want.

What does change here, though, is the derivative of the cost with respect to one of the activations in layer (L-1). In this case, the difference is that the neuron influences the cost function through multiple paths. That is, on the one hand, it influences a_0^(L), which plays a role in the cost function, but it also has an influence on a_1^(L), which also plays a role in the cost function. And you have to add those up (there’s a small sketch of this summing-over-paths step at the end of this section). And that… well, that is pretty much it. Once you know how sensitive the cost function is to the activations in this second-to-last layer, you can just repeat the process for all the weights and biases feeding into that layer.

So pat yourself on the back! If all of this makes sense, you have now looked deep into the heart of backpropagation, the workhorse behind how neural networks learn. These chain rule expressions give you the derivatives that determine each component in the gradient that helps minimize the cost of the network by repeatedly stepping downhill. Whew! If you sit back and think about all that, that’s a lot of layers of complexity to wrap your mind around, so don’t worry if it takes time for your mind to digest it all.
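If it helps to see those extra indices in action, here is a minimal Python sketch of one layer of the multi-neuron case; the sizes and values are made up and sigmoid is assumed. Note the sum over j in dC_da_prev, which is exactly the adding-up-over-paths step described above:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)

# Made-up values: 2 neurons in layer L-1 (index k), 2 in layer L (index j).
w = [[0.2, -0.4],   # w[j][k]: edge from neuron k in layer L-1 to neuron j in layer L
     [0.7,  0.1]]
b = [0.05, -0.3]
a_prev = [0.6, 0.9] # activations a_k^(L-1)
y = [1.0, 0.0]      # desired outputs y_j

# Forward pass for layer L.
z = [sum(w[j][k] * a_prev[k] for k in range(2)) + b[j] for j in range(2)]
a = [sigmoid(zj) for zj in z]

# dC/dz_j^(L) = 2(a_j - y_j) * sigmoid'(z_j), one value per output neuron.
dC_dz = [2 * (a[j] - y[j]) * dsigmoid(z[j]) for j in range(2)]

# Weight and bias gradients, one per index pair.
dC_dw = [[dC_dz[j] * a_prev[k] for k in range(2)] for j in range(2)]
dC_db = list(dC_dz)

# Each previous activation influences the cost through EVERY neuron j,
# so its derivative adds up the contribution from each path.
dC_da_prev = [sum(dC_dz[j] * w[j][k] for j in range(2)) for k in range(2)]

print(dC_dw, dC_db, dC_da_prev)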

100 thoughts on “Backpropagation calculus | Deep learning, chapter 4”

  • Two things worth adding here:
    1) In other resources and in implementations, you'd typically see these formulas in some more compact vectorized form, which carries with it the extra mental burden to parse the Hadamard product and to think through why the transpose of the weight matrix is used, but the underlying substance is all the same.

    2) Backpropagation is really one instance of a more general technique called "reverse mode differentiation" to compute derivatives of functions represented in some kind of directed graph form.
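
    For the curious, here is a minimal numpy sketch of what that vectorized form might look like (the shapes and values are made up, and sigmoid is assumed), showing both the Hadamard product and the transposed weight matrix in action:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    # Made-up shapes: W is (neurons in layer L) x (neurons in layer L-1).
    W = np.array([[0.2, -0.4],
                  [0.7,  0.1]])
    b = np.array([0.05, -0.3])
    a_prev = np.array([0.6, 0.9])
    y = np.array([1.0, 0.0])

    z = W @ a_prev + b
    a = sigmoid(z)

    # "*" is the element-wise (Hadamard) product here; sigmoid'(z) = a(1 - a).
    delta = 2 * (a - y) * a * (1 - a)   # dC/dz for the last layer

    grad_W = np.outer(delta, a_prev)    # dC/dW, one entry per weight
    grad_b = delta                      # dC/db
    # The transpose of W routes each delta back along the right edges:
    dC_da_prev = W.T @ delta
    ```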

  • Mr 3Blue1Brown,
    Are you an alien from an advanced civilization in a faraway galaxy, sent here to Earth to enlighten us dumb earthlings on the idea that mathematics is truly beautiful, and actually can be learned by anyone who cares to learn??
    If you are an alien, then please don't go back; stay here on Earth as long as you can. For we surely need you here on Earth to wake us out of our mathematical and scientific ignorance. Our educational systems here on Earth have truly failed us and driven millions of us to dislike mathematics and science. You are our only hope!!!

  • I have never had more than high school calculus and I was able to follow this video, just by watching your partial derivative video and then this one. You are an absolute master of communication.

  • https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/ is a godsend. This video explains the high-level concepts, but fails to detail some of the underlying work that goes on. I spent at least an hour beating my head against a wall, re-watching portions of this video while trying to figure out how to calculate `y` for hidden layers.

  • Great series! One question though: if I got it right, after this video we now know how to calculate the gradient vector (each component is calculated by averaging all the partial derivatives with respect to the corresponding variable in the cost function). My question is this: once we have the gradient vector, how exactly do we determine how to change the weights and biases to reduce the cost function?

    Any help would be appreciated.

  • This was incredibly enlightening, but I still don’t know where the variable ‘b’ for bias is coming from. Is that set at the beginning to correct for errors, or…?

  • I noticed in one of the previous videos, you showed that this was just the beginning of a series of neural network topics, with convolutional neural networks and LSTMs listed down the path.

    Do you plan on making a future video series on more specific types of neural networks, like CNNs and LSTMs?

  • Dude, first of all, thank you for all of your videos. I've been watching you since 2018, and it's been surprising how you teach and how much easier it gets (except for some 4D sphere videos haha) when you visualize things, not only in our minds but in fact right in front of us. I was about to ask: have you thought about subtitling your videos in Portuguese (Brazilian)? Here in Brazil there are a LOT of students and people interested in it (AI, neural networks) and in pure math. It'd be a lot easier (even than it already is) to understand if we could read in our own language. But you know, I learn a lot from you, but I "learn in English". It is kinda tricky to "transpose" some content to Portuguese because of the differences between the languages, even just for my own self-teaching. Some terms are not immediately translatable (especially when talking about math), and I've got to go to Google Translate instead if I want to write down some specific part or even to understand some specific statement you made in "math" language. Hope you got what I mean; sorry for some errors in grammar btw. It'd be a LOT helpful if you were able to subtitle those videos (specifically the Neural Networks ones) in Portuguese – BR.

    Anyway, thanks again for the great content. bye bye

  • Thank you very much for bringing us such a good video.

  • At 5:09, shouldn't the change in a(L) / change in z(L) be equal to the sigmoid? Why is it giving the derivative of the sigmoid?

  • Dude, how the fuck do you do your animations, and which program do you use to create these videos??? Thanks for the explanation btw, and you are crazy xD

  • When propagating backwards, do you just take the activations of the next layer as if they were y? Or do you have to add the nudges, or what?

  • 4:09 -> 4:11
    (a^L – y)^2 = 2(a^L – y) this is 'a huge' ;D mistake :[
    (a^L – y)^2 = (a^L – y)(a^L – y) it will work

  • I don't recall hearing this video say anything about what value we should modify any one weight or bias by. At least one single example using values would have been nice.

  • Thank you very much for your amazing explanations. I'd been buckling down a lot for understanding back propagation. While all the explanations I found failed to make me clearly understand the topic, your explanation just worked wonders! Thanks again for so nice videos!

  • I attempted to create my own neural network from scratch based on this video series… However, this last video still jumps over a few bits, which likely proved to be my downfall… since the neural network really just does not work, unfortunately.

  • What an amazing video series. Everything was so well explained that I, having just learned this math, was able to follow along. I wish I could do more to support this channel.

  • I'd trained dozens of CNN models, but now I understand what I did there. Awesome content with such a beautiful soundtrack. 😊

  • Nice, but none of it makes it any more clear how I would write the code to do this. This did not even cover the adjustment of weights.

  • At 4:44 I still have a problem calculating the weight. I don't understand how to calculate all the deltas for the sum of "z(L)". Please help.

  • You are some sort of sorcerer. The chain rule was finally elucidated for me in about 20 seconds.
    So concise and intuitive indeed.
    That was one thing, for whatever reason, I couldn't get my head around as a younger student.
    Big props man. Love this channel.

  • I admire you!
    It would help a lot if you told us how you manage to gain such an understanding from books. By that I mean, what does it take to be able to make those videos? Does everything you know come from books? Do you have a good professor? Do you experiment with visualization tools to achieve the geometrical interpretation? Or are you simply gifted at gaining this kind of understanding?

  • Please make videos on deep reinforcement learning, Q-learning, DQN. It's the only channel that explains all the maths with the greatest visuals behind BP and basic DL.

  • That's one of the worst ways to learn backprop; the idea comes from the chain rule and the rest is linear algebra. Better to go with the book in the description.

  • Holy mother of God!!!
    How did I intuitively understand such a complex thing!!!!??
    Grant, Sir!!! You are a god to me

  • Hello. I have a question. I watched a lot of videos and I can't figure out a thing about neural networks.

    Is the BIAS common to all the neurons of a layer, or does every neuron have its own bias?

    In some schematics the bias is like a neuron with activation (1), and it has different weights when connecting to every neuron of the layer. In other schematics, the bias has one value as activation and there are no weights (so the bias is equal for all the neurons of that layer).
    Thank you very much!

  • How about keeping it simple, without derivatives? Just show the values and how they are subtracted and such.

  • It has taken me about 3-4 days' worth of time to understand all of these 4 lectures, which are, in total, no longer than 1 hour and 30 minutes.

    And I feel proud.

  • So, I get that the desired output (y) for the neurons in the output layer can either be a 0 or a 1. But what is the desired output (y) when calculating the gradients for the second-to-last layer of neurons? What activation do we actually desire for the layer behind the output layer?

  • One of the best lectures I have ever heard. Great explanation of NNs, cost functions, activation functions, etc. Now I understand NNs far, far better… (P.S. I saw the previous videos, Parts 1, 2, 3, as well)

  • Without this I never would've been able to make my first neural network, even though all it did was learn how to respond to Rock Paper Scissors when it already knows which one you're going to play (basically, overfitting is the goal)

  • This is math at its best, and art at its highest. The grace of the animation, the subtle music, the perfectly paced narration and the wonderful colour scheme! Math and art, or let's say math is art!

  • Wow, so clear, thanks 😀
    I'm not sure why the derivative of z(L) with respect to w(L) is a(L-1), though :/
    Could anyone explain it to me? 🙂

  • It's great that you started it backwards, but when I started to program it I realized something. What is a^(L-1) when computing the first layer? Is it simply the value from the inputs?

    z^(L) = w^(L) a^(L-1) + b^(L)

    a^(L) = f(z^(L))

  • Thank you so so so much for making this series. Within an hour, I feel that I have learned a good deal about Neural Networks. You are amazing!

  • Is there any learning material available on the internet for a simple neural net which goes step by step, computing actual values (cost functions, derivations) for all weights, biases and iterations? That would be very practical.

  • You are not a good teacher. You failed to extrapolate from the simple example to the general one. You should give an example with a 2-layer network, with the cost function and activation function, and then show explicitly, by writing it down, how the derivatives go all the way back.

  • This video doesn't actually suggest how one chooses a value to add to the weights, and the propagation seems to go back to the first layer only – how are alterations applied to the second and third layers?

  • At like 8:25, why is C0 the sum going from j = 0 to n_L − 1, with n_L being the number of neurons of the last layer, and why is j then used in the output's subscript (a_j)? Is that a mistake or am I missing something?

  • So, with stochastic gradient descent, would you only change some of the weight values each iteration in the training phase?

  • Wow, this took a long time to get my head around fully, but I was finally able to understand it enough to implement my own version of backpropagation from scratch thanks to this video! Neural networks are something I've wanted to get into for a while and I'm really grateful for these wonderful in-depth explanations!

  • That unexplained little formula addition at the end (9:30), showing the partial derivative of the cost function with respect to the current node and layer when the current layer is not the output layer, really messed with me. In typical notation, that's a lowercase delta, correct?

  • I have a problem with my neural net, which I built from scratch following these videos. Most of the time my net gives pretty sure answers like [0.999987, 0.000323] (so far I've only tested with self-created data, like input [1,2,3,4] should give [1,0] as an output), but sometimes, with different initializations of weights and biases, the training ends with the feedforward now giving some strange answers like [0.004, 0.000003]. There's still a clear distinction between the probability of the right answer and the wrong answer, but it is nowhere near the optimal sure answer [1,0]. What is going on here? Is it that my gradient descent finds a local minimum which gives as an output the said [0.004, 0.000003] and gets stuck there? Is this a common problem with neural nets? Should I try to find initial configurations for which the local minimum gives the least error?

  • Hope to see one more episode soon on the same thing in matrix notation, which would make it easier to relate to an actual implementation.

  • Many guys claim to know. Some guys actually know. But only one guy actually knows and can explain it to his grandma as well, with very beautiful animations. You are that ONE!!!

  • I wish I had seen this video much earlier, since I'm good at the chain rule and also at optimization problems. I attended a lecture on neural networks in 1984. I didn't really understand how one could determine weights without fitting. It looked to me like backpropagation was a swindle, until I saw this video. Now I can show my friends using a couple of lines on a whiteboard.

  • I have been trying to understand how to host a 'hello_world' Python server for about a week, and I still don't understand. I have watched your 4 videos a few times and made my own neural network that can understand the world. Man, I wish people who teach were at least 10% as good at teaching as you.
