Neural Networks: Learning Parameters (Weights, Biases, and Backprop)

AI, But Simple Issue #2

Last week we discussed how a neural network works in very simple terms.

  • (If you haven’t read it, you can find it here)

Today, we’re going to build on that concept.

So how do neural networks actually learn?

When data is fed through a neural network, the network’s weights and biases determine how strongly each input influences the final output of the model.

  • The form of that output varies by task. For classification problems, for instance, the network may output a vector of probability values, one for each class (see the sketch below).
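To make this concrete, here is a minimal sketch in plain NumPy, assuming a hypothetical single-layer classifier with 3 input features and 2 classes (the weights, shapes, and input values are all made up for illustration). It shows how the weights and biases turn an input into a vector of class probabilities:

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability
    z = z - np.max(z)
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Hypothetical toy setup: 3 input features, 2 output classes
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))     # weights: one row per class
b = np.zeros(2)                 # biases: one per class

x = np.array([0.5, -1.2, 0.3])  # a single input example

logits = W @ x + b              # weights and biases shape this score
probs = softmax(logits)         # vector of probabilities, one per class
print(probs, probs.sum())       # the probabilities sum to 1
```

Change any entry of `W` or `b` and the printed probabilities shift, which is exactly the lever that learning pulls on.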

But how are these weights and biases actually tweaked?

We use something called an error gradient: roughly, the derivative of the network’s error with respect to each weight and bias, which tells us in which direction (and by how much) to nudge each parameter to reduce that error.
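Here is a minimal sketch of that idea, assuming the simplest possible setup: a single linear neuron with a squared-error loss, trained by gradient descent on one made-up example. The gradient formulas follow directly from the chain rule:

```python
import numpy as np

# Hypothetical setup: one linear neuron, squared-error loss.
# Forward pass: y_hat = w @ x + b;  loss L = (y_hat - y)**2
x = np.array([0.5, -1.2, 0.3])   # single input example (made up)
y = 1.0                          # its target value

w = np.zeros(3)                  # weights, initialized to zero
b = 0.0                          # bias
lr = 0.1                         # learning rate

for step in range(50):
    y_hat = w @ x + b            # forward pass
    error = y_hat - y
    # Error gradients via the chain rule:
    #   dL/dw = 2 * error * x,   dL/db = 2 * error
    grad_w = 2 * error * x
    grad_b = 2 * error
    # Nudge each parameter opposite its gradient to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b, w @ x + b)           # the prediction approaches the target
```

Backpropagation is this same chain-rule bookkeeping applied layer by layer through a deeper network, so every weight and bias gets its own gradient.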
