Linear- and Multiple Regression from scratch

Philipp Muens

You can find working code examples (including this one) in my lab repository on GitHub.

Linear Regression is one of the basic Machine Learning algorithms every student eventually encounters when starting to dive deeper into the field. If you've ever heard someone trying to "fit a line through the data", that person most likely worked with a Linear Regression model.

In which scenarios should we use Linear Regression and if we do, how do we find such a best-fitting line? And what if our data is multidimensional? Let's answer all those questions by implementing Linear and Multiple Regression from scratch!

Note: Throughout this post we'll be using the "Auto Insurance in Sweden" data set which was compiled by the "Swedish Committee on Analysis of Risk Premium in Motor Insurance".

Data-driven decision making

Let's take a step back for a minute and imagine that we're working at an insurance company which, among other things, sells car insurance.

Over the years the insurance company has collected a wide variety of statistics for all the policies it sells, including its car insurance. Part of this data concerns the number of filed claims and the payments which were issued for them. The following table shows an excerpt from this data:

Number of claims    Payments issued
108                 392.5
19                  46.2
13                  15.7
124                 422.2
...                 ...

One day we get a call from our colleague who works at the claims settlement center. She has to plan the division's budget for the upcoming year, a plan which is usually derived from best guesses. Since we've collected all this data throughout the years she wonders if there's a more reliable, mathematical way to calculate the budget estimate.

Nodding along we confirm that we'll dive deeper into this topic and hang up the telephone in sheer excitement! Finally we're able to put some real Machine Learning into practice. But how should we tackle the problem?

Data visualization

It's hard to gain any insights into the data we're dealing with by manually examining the raw numbers. In order to get a better understanding of the data it's always a good idea to visualize it first.

Given that we're dealing with 2 dimensions (the number of claims and the issued payments), one of the potential diagrams we can create is a so-called scatter plot which uses (Cartesian) coordinates to display the values of a given data set. In our case we treat the number of claims as our x-axis and the issued payments as our y-axis and plot each recorded pair at the corresponding coordinates, which results in the following diagram:

[Figure 1: Scatter plot of the number of claims (x-axis) vs. the issued payments (y-axis)]

Solely by looking at the diagram we can already identify a trend in the data. It seems to be the case that the more claims were filed, the more payments were issued. Intuitively that makes sense.

After inspecting the plotted data in more detail we observe that we can certainly make some rough predictions for missing data points. Given the trend in the data it seems reasonable to expect ~80 issued payments when ~40 claims were filed, and ~410 issued payments when ~125 claims were filed.

Is there some way to turn this insight into a model we can use to make arbitrary predictions? It seems like the relationship in the data is linear. Is there a way to capture this notion mathematically?

Linear Functions

You might remember the concept of a Linear function from school where you've used the slope-intercept form (one of many forms) to mathematically describe a line:

y = mx + b

The slope-intercept form has 2 parameters which determine how the line "behaves" in the Cartesian plane (the typical 2D plane with x and y coordinates):

  • m is the line's slope, which measures how steeply the line is slanted
  • b is the y-intercept, which determines the point at which the line crosses the y-axis

Using this formula we can plug in any arbitrary x value, which is then multiplied by m and added to b to get back the corresponding y value.

Let's look at a couple of examples to get a better feeling as to how the slope-intercept form works.

The slope m

Let's solely focus on m for now and set b to 0. How does m influence the way our line will be plotted if we set it to 1?

Here's the mathematical representation of such a line followed by the corresponding plot:

y = 1x + 0

[Figure 2: Plot of the line y = 1x + 0]

As you can see, for every step of size 1 in the x direction we "go" a step of size 1 in the y direction.

The y-intercept b

Now that we understand what the parameter m is responsible for, let's take a look at the y-intercept b and set it to 1:

y = 1x + 1

[Figure 3: Plot of the line y = 1x + 1]

The steepness of the line is the same as before since we haven't modified m. However if you take a look at x = 0 you'll notice that the line crosses the y-axis at 1. That's exactly what the parameter b is responsible for: through b we can control where our line should start on the y-axis when x = 0.

Turning it into code

Let's translate the slope-intercept form into a function we call predict (we'll use this function for our predictions later on):

def predict(m: float, b: float, x: float) -> float:
    # The slope-intercept form: y = m * x + b
    return m * x + b

assert predict(m=0, b=0, x=3) == 0
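
We can also quickly verify the two example lines from above (these extra checks are my own addition, not part of the original snippet):

assert predict(m=1, b=0, x=1) == 1  # y = 1x + 0: one step in x is one step in y
assert predict(m=1, b=1, x=0) == 1  # y = 1x + 1: the line crosses the y-axis at 1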

Applying what we've learned so far

Let's put the theory into practice and try to guesstimate a line which best describes our data.

The first thing we notice is that the individual data points follow an upwards trend, so m will certainly be positive. Furthermore the data points close to x = 0 seem to have low y values as well. Taking those observations into account we guess the following description for our line:

y = 4x + 2

[Figure 4: The guessed line y = 4x + 2 plotted on top of the scatter plot]

Not too bad for our first guess! Just by looking at the plotted line we might ask ourselves whether there's a better fitting line. Can we quantify how well our line fits the data?

The error function

Given that the learning part in Machine Learning is usually an iterative process which starts with an initial guess and slowly computes new "best guesses" until it finally converges to the optimal solution, it's a necessity to be able to track the learning process.

A good way to supervise the learning process is to mathematically capture the "wrongdoing" our algorithm inevitably produces while trying to determine the function which best describes the data.

In the case of Linear Regression it seems to make sense to compare the y-values the line produces to the actual y-values from the data set. We could for example go through each individual (x, y) pair in our data set and subtract its y value from the y value our line "predicts" for the corresponding x. Summing up these differences results in a number we can use to compare different lines against each other. The higher the number, the "less correct" the line.

That's great but there's one minor catch. Imagine that the line which is fitted through the data predicts large positive y values near the origin (0, 0) where it should predict large negative numbers. At the same time it predicts large negative numbers near the end of the x-axis although those values should be positive. If we calculate the errors according to the description above, where we suggested summing up the differences between the y values, we'd end up in a situation where individual errors might cancel each other out. In the worst case the calculated error is 0, which indicates that we've found the best fitting line while in reality we didn't!

A simple trick to mitigate this problem is to square each single error value before they're summed up. This way any negative value will be turned into a positive one, making it impossible to run into scenarios where error calculations cancel each other out.
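
A tiny, made-up example illustrates the difference between summing raw errors and summing squared errors:

# Two predictions: one overshoots by 5, one undershoots by 5
errors = [5.0, -5.0]

assert sum(errors) == 0.0                    # raw errors cancel out -> looks like a perfect fit
assert sum(e ** 2 for e in errors) == 50.0   # squared errors reveal the actual misfit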

The error function we've just described is called Residual sum of squares (RSS) or Sum of squared errors (SSE) and is one of many error functions we can use to quantify the algorithm's "wrongdoing". The following is the mathematical formula for SSE:

SSE = \sum_{i=1}^n (y_i - f(x_i))^2

Turning that into code results in the following:

from typing import List

def sum_squared_error(ys: List[float], ys_pred: List[float]) -> float:
    assert len(ys) == len(ys_pred)
    return sum([(y - y_pred) ** 2 for y, y_pred in zip(ys, ys_pred)])

assert sum_squared_error([1, 2, 3], [4, 5, 6]) == 27
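
As a quick illustration we can compute the SSE of our guessed line y = 4x + 2 against the four sample rows from the table above (the full data set would of course yield a different number):

xs_sample = [108, 19, 13, 124]
ys_sample = [392.5, 46.2, 15.7, 422.2]

ys_sample_pred = [predict(m=4, b=2, x=x) for x in xs_sample]
print(sum_squared_error(ys_sample, ys_sample_pred))  # ~9946.02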

Finding the best fitting line...

With those two code snippets (the predict and sum_squared_error functions) we're now able to describe a line, predict y values and measure how "off" our predictions are. The last missing piece we'll need to get in place is a way to update our line description such that the next sum_squared_error calculation returns an error value which is less than our current one. If there's a way to constantly reduce the error we're making by slowly updating our line description we'll eventually end up with a line which best fits our data!

With Linear Regression there are a couple of different algorithms we can use to find the best fitting line. One prominent choice is the Ordinary least squares (OLS) method. Since OLS is a common choice we'll do something different. We'll use the Gradient Descent algorithm which can be used for various different optimization problems and is at the heart of modern Machine Learning algorithms.
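
For reference, the closed-form OLS solution for a single feature could look roughly like the sketch below (the function name ols_fit is my own; it uses the textbook formulas m = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and b = ȳ − m·x̄). We won't use it here, but it can serve as a cross-check for the Gradient Descent result later on:

from typing import List, Tuple

def ols_fit(xs: List[float], ys: List[float]) -> Tuple[float, float]:
    # Closed-form least-squares estimates for a single feature
    x_mean: float = sum(xs) / len(xs)
    y_mean: float = sum(ys) / len(ys)
    m: float = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum((x - x_mean) ** 2 for x in xs)
    b: float = y_mean - m * x_mean
    return m, b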

Note: If you haven't already I'd suggest that you take a couple of minutes to read the article "Gradient Descent from scratch" in which I explain the whole algorithm in great detail.

... using Gradient Descent

I won't provide too many explanations regarding Gradient Descent here since I already covered the topic in the aforementioned post. Don't get too intimidated by the Math below. It's ok if you just skim through this section to get a high-level overview.

In a nutshell Gradient Descent makes it possible for us to iteratively "walk down" the error function's surface to eventually find a local minimum where the error is the smallest, which is exactly what we're looking for.

In order to figure out in which direction we should walk to descend to the local minimum we need to compute the so-called gradient. The gradient is a vector consisting of the partial derivatives of our error function, and it points in the direction of greatest increase at any given point p on our error function's surface.

To find the partial derivatives of our SSE function we should expand it so that we can see all the variables we need to take into account:

SSE = \sum_{i=1}^n (y_i - f(x_i))^2 = \sum_{i=1}^n (y_i - (mx_i + b))^2

Looking at the expanded formula we can see that there are two parameters, m and b, we need to take the partial derivatives with respect to:

\frac{\partial SSE}{\partial m} = \sum_{i=1}^n 2 x_i ((mx_i + b) - y_i)
\frac{\partial SSE}{\partial b} = \sum_{i=1}^n 2 ((mx_i + b) - y_i)

Which results in the following code:

# The partial derivative of SSE with respect to `m`
grad_m: float = sum([2 * (predict(m, b, x) - y) * x for x, y in zip(xs, ys)])
# The partial derivative of SSE with respect to `b`
grad_b: float = sum([2 * (predict(m, b, x) - y) for x, y in zip(xs, ys)])

Tip: You can use WolframAlpha to validate your partial derivatives.
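
Another option is a quick finite-difference check in code (a small sketch of my own; sse_at is a hypothetical helper that evaluates the SSE for given m and b over the data set):

def sse_at(m: float, b: float, xs: List[float], ys: List[float]) -> float:
    return sum_squared_error(ys, [predict(m, b, x) for x in xs])

def numeric_grad_m(m: float, b: float, xs: List[float], ys: List[float], eps: float = 1e-6) -> float:
    # Central-difference approximation of the partial derivative of SSE with respect to `m`
    return (sse_at(m + eps, b, xs, ys) - sse_at(m - eps, b, xs, ys)) / (2 * eps)

The result should closely match the grad_m value computed above.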

Given these partial derivatives we can now calculate the gradient at any point (m, b) on our error function's surface. The gradient is a vector pointing in the direction of greatest increase. Multiplying the vector by -1 will let it point into the opposite direction, the direction of greatest decrease (remember that we want to find a local minimum). If we add a small fraction of this vector to our m and b values respectively we should end up closer to a local minimum.

The following code captures what we've just described:

# Take a small step in the direction of greatest decrease
# The `learning_rate` controls the step size when "walking" down the gradient
learning_rate: float = 0.0001

m = m + (grad_m * -1 * learning_rate)
b = b + (grad_b * -1 * learning_rate)

Repeating this process multiple times should help us find the m and b values for our line for which any prediction y calculated by that line results in the smallest error possible.

Let's put all the pieces together and implement the Gradient Descent algorithm to find the best fitting line:

# Find the best fitting line through the data points via Gradient Descent
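# `xs` (number of claims) and `ys` (issued payments) are assumed to hold the two
# columns of the data set we plotted earlier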

# Our initial guess
m: float = 0
b: float = 200

print(f'Starting with "m": {m}')
print(f'Starting with "b": {b}')

# Doing 10,000 iterations
epochs: int = 10000
learning_rate: float = 0.00001

for epoch in range(epochs):
    # Calculate predictions for `y` values given the current `m` and `b`
    ys_pred: List[float] = [predict(m, b, x) for x in xs]

    # Calculate and print the error
    if epoch % 1000 == 0:
        loss: float = sum_squared_error(ys, ys_pred)
        print(f'Epoch {epoch} --> loss: {loss}')

    # Calculate the gradient
    # Taking the (partial) derivative of SSE with respect to `m` results in `2 * x ((m * x + b) - y)`
    grad_m: float = sum([2 * (predict(m, b, x) - y) * x for x, y in zip(xs, ys)])
    # Taking the (partial) derivative of SSE with respect to `b` results in `2 ((m * x + b) - y)`
    grad_b: float = sum([2 * (predict(m, b, x) - y) for x, y in zip(xs, ys)])

    # Take a small step in the direction of greatest decrease
    m = m + (grad_m * -learning_rate)
    b = b + (grad_b * -learning_rate)

print(f'Best estimate for "m": {m}')
print(f'Best estimate for "b": {b}')

Running this algorithm results in a best estimate for the m and b values. Let's compare our initial guess of m and b (the guess we started with at the top of the code snippet) with the values our Gradient Descent implementation produced:

[Figure 5: The initial guess m = 0, b = 200 plotted against the data]

m = 0, b = 200

[Figure 6: The fitted line with m ≈ 3.40, b ≈ 20.30 plotted against the data]

m ≈ 3.40, b ≈ 20.30

Awesome! Seems like we've found the linear function which best describes our data! Let's call our co-worker and share the good news. From now on she can use the following formula to find a prediction for the issued payments (y) based on any number of claims (x):

y = 3.40x + 20.30
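
As a quick sanity check we can plug the first row from the table above into the fitted line (the exact numbers will vary slightly from run to run):

# 108 claims were recorded alongside 392.5 in issued payments
print(predict(m=3.40, b=20.30, x=108))  # ~387.5, reasonably close to the recorded 392.5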

From Linear Regression to Multiple Regression

It's great to be able to fit a line through data points in 2 dimensions. But how do we deal with scenarios where our data has more than 2 dimensions?

Most data sets capture many different measurements which are called "features". It would be great if we could take the most important features into account when working with our algorithms. Given that every feature adds another dimension we need to ensure that the model we're building can deal with such high-dimensional data.

Our Linear Regression model was only able to take a single x value and predict a corresponding y value. What if we have multiple x values? Is there a way to use a regression model to predict a y value based on multiple x values?

As it turns out Linear Regression is a special case of a more general regression model called Multiple Linear Regression or Multiple Regression. Multiple Regression can deal with an arbitrary number of x values, expressed as a vector, to predict a single y value.

The great news is that we can easily adapt what we've learned so far to deal with such high-dimensional data. Let's take a quick look at the changes we need to make.

Tweaking the Linear Function

The slope-intercept form we've used so far can easily be updated to work with multiple x values. Here's the linear equation again:

y = mx + b

Having multiple x values means that we'll also have multiple m values (one for each x). However we'll still only deal with 1 intercept:

y = m_1 x_1 + ... + m_n x_n + b

Calculating a prediction for y is as simple as solving the above equation for any given vector of x values, vector of m values and any given b value.

There's one little trick we can apply given that we're now mostly dealing with vectors rather than scalar numbers. To make the computation more efficient we can use the dot-product, which carries out almost exactly the calculation we described above. There's just one problem: the dot-product is only defined between vectors, and b isn't a vector. As it turns out we can simply prepend the b value to the m vector and prepend a 1 to the x vector. This little trick makes it possible to use the dot-product calculation while also taking the b value into account. Here's what we'd end up with when doing just that:

\vec{x} = \begin{pmatrix} 1 \\ x_1 \\ ... \\ x_n \end{pmatrix} \quad \vec{m} = \begin{pmatrix} b \\ m_1 \\ ... \\ m_n \end{pmatrix}
y = \vec{x} \cdot \vec{m} = \sum_{i=0}^{n} x_i m_i = 1 \times b + x_1 \times m_1 + ... + x_n \times m_n

Another nice side-effect of doing this is that the partial derivative calculations for the error function will also be easier since our usage of the dot-product reduced the number of variables we have to take into account to just the 2 vectors x and m.
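
Here's a minimal sketch of what the prediction step could look like with the dot-product trick applied (the helper names dot and predict_multiple are my own, not from the repository):

from typing import List

def dot(a: List[float], b: List[float]) -> float:
    assert len(a) == len(b)
    return sum(a_i * b_i for a_i, b_i in zip(a, b))

def predict_multiple(m_vec: List[float], b: float, x_vec: List[float]) -> float:
    # Prepend the intercept `b` to the coefficient vector and a 1 to the feature vector
    return dot([b] + m_vec, [1.0] + x_vec)

assert predict_multiple(m_vec=[2.0, 3.0], b=1.0, x_vec=[4.0, 5.0]) == 1.0 + 2.0 * 4.0 + 3.0 * 5.0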

Fitting Hyperplanes

And that's pretty much all there is to change. The rest of the code follows exactly the same way. While we fitted a line when working with Linear Regression we're now fitting a so-called hyperplane with Multiple Regression.

To get a better intuition for the notion of a hyperplane imagine that we have measurements we can scatter plot in a 3-dimensional space. Every measurement will be a single dot in that space, resulting in a cloud of dots. Our Multiple Regression algorithm will now try to find a plane (think of it as a wooden plank) which best fits through that dot cloud.

Conclusion

Linear Regression is one of the very first algorithms every student encounters when learning about Machine Learning models and algorithms.

The basic idea of Linear Regression is to find an equation for a line which best describes the data points in the given data set. Such a line is often described via the slope-intercept form y = mx + b. "Fitting the line" means finding the m and b values such that the resulting y value is as accurate as possible given an arbitrary x value. Using an error function (which describes how "off" our current line equation is) in combination with an optimization algorithm such as Gradient Descent makes it possible to iteratively find the "best fitting" line.

As it turns out Linear Regression is a specialized form of Multiple Linear Regression, which makes it possible to deal with multidimensional data by expressing the x and m values as vectors. While this requires the usage of techniques such as the dot-product from the realm of Linear Algebra, the basic principles still apply. In Multiple Linear Regression we're just trying to find a "best fitting" hyperplane rather than a line.

Do you have any questions, feedback or comments? Feel free to reach out via E-Mail or connect with me on Twitter.

Additional Resources

The following is a list with resources I've used while working on this blog post. Other useful resources are linked within the article itself.