Linear Regression, Mathematically Explained

AI, But Simple Issue #26

Hello from the AI, but simple team! If you enjoy our content, consider supporting us so we can keep doing what we do.

Our newsletter is no longer sustainable to run at no cost, so we’re relying on different measures to cover operational expenses. Thanks again for reading!

In this issue, we'll explore Linear Regression, one of the simplest and most accessible algorithms in data science and machine learning.

Linear regression is a type of supervised machine learning algorithm, meaning that the data fed to the algorithm is labeled (each input comes paired with its true target value).

  • Regression means predicting a numerical value, as opposed to predicting a class, as in classification.

Supervised machine learning algorithms rely on mathematical transformations using linear algebra and calculus to analyze patterns in labeled data.

These algorithms (we’ll call them models for now) are fed input data and are asked to produce relevant outputs.

  • For instance, if we’re given the square footage of a house, could the model predict a reasonable price for the house?
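To make this concrete, a one-feature linear model predicts the price as a weighted sum of the input plus a bias. Here's a minimal sketch; the weight and bias values are made up for illustration, not learned from real data:

```python
# Minimal sketch of a one-feature linear model: price = w * sqft + b.
# The weight w (price per square foot) and bias b are illustrative
# values, not parameters learned from real housing data.

def predict_price(sqft: float, w: float = 150.0, b: float = 20000.0) -> float:
    """Predict a house price (in dollars) from square footage."""
    return w * sqft + b

print(predict_price(1000.0))  # 150 * 1000 + 20000 = 170000.0
```

Training is the process of finding good values for `w` and `b` from labeled data, rather than guessing them as we did here.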

Training occurs when the model’s parameter values are updated, producing a better-fitting line or curve (and therefore more accurate predictions).

During training, the model uses what’s known as a loss function, which measures the distance, or inaccuracy, between the model’s predictions and the actual values.
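A common loss function for regression is mean squared error (MSE): the average of the squared differences between predictions and targets. A minimal sketch, with made-up numbers purely for illustration:

```python
# Mean squared error (MSE), a common regression loss:
# the average squared difference between predictions and targets.

def mse(predictions: list[float], targets: list[float]) -> float:
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# A prediction far from the target gives a large loss...
print(mse([10.0], [2.0]))  # (10 - 2)^2 / 1 = 64.0
# ...while a close prediction gives a small one.
print(mse([2.5], [2.0]))   # (2.5 - 2)^2 / 1 = 0.25
```

Squaring the difference keeps the loss non-negative and penalizes large errors more heavily than small ones.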

The model determines the best-fitting line, plane, or curve by minimizing the loss function using an optimization algorithm.

So, if the loss is high, then the numerical distance between the predicted value and the actual value is large, which is bad.

If the loss is low, the distance between the predicted value and the actual value is small, which is good.
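One standard optimization algorithm for this is gradient descent, which repeatedly nudges the parameters in the direction that reduces the loss. Below is a minimal sketch fitting a line to tiny synthetic data; the dataset, learning rate, and iteration count are all illustrative choices, not prescriptions:

```python
# Minimal gradient descent sketch: fit y = w * x + b to tiny synthetic
# data generated from y = 2x + 1. All values here are illustrative.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly 2x + 1

def mse(w: float, b: float) -> float:
    """Mean squared error of the line (w, b) on the data above."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, b = 0.0, 0.0   # start from an arbitrary (bad) line
lr = 0.05         # learning rate: size of each update step
loss_before = mse(w, b)

for _ in range(2000):
    # Partial derivatives of the MSE with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Step against the gradient to reduce the loss.
    w -= lr * grad_w
    b -= lr * grad_b

print(loss_before, mse(w, b))  # loss falls from 21.0 toward 0
print(w, b)                    # w approaches 2, b approaches 1
```

The loss starts high because the initial line is far from the data, and shrinks as the parameters converge toward the values that generated the data.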


Before getting into the mathematics of linear regression, please read the issue below on optimization algorithms to further understand loss, cost, and objective functions, along with how optimization works in general:

Basic linear algebra knowledge (matrix multiplication, transpose) and a basic knowledge of multivariable calculus (partial derivatives, gradients, chain rule, matrix calculus) are also recommended.
