In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X.

For linear regression with multiple variables, our job is to find the right 𝜃 in the hypothesis function below.

$$h_𝜃(x) = 𝜃_0+𝜃_1x_1+𝜃_2x_2+…+𝜃_nx_n$$

For convenience of notation, define $x_0 = 1$.

$$ x=\begin{bmatrix}x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_n \\ \end{bmatrix} ∈ \mathbb{R}^{n+1} \qquad 𝜃=\begin{bmatrix}𝜃_0 \\ 𝜃_1 \\ 𝜃_2 \\ \vdots \\ 𝜃_n \\ \end{bmatrix} ∈ \mathbb{R}^{n+1}$$

So

$$h_𝜃(x) = 𝜃^Tx$$

Hypothesis:

$$h_𝜃(x)=𝜃^Tx=𝜃_0+𝜃_1x_1+𝜃_2x_2+…+𝜃_nx_n$$

Parameters:

$$𝜃_0, 𝜃_1,…, 𝜃_n$$

Cost function:

$$J(𝜃) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_𝜃(x^{(i)}) - y^{(i)}\right)^2$$

where $m$ is the number of training examples and $(x^{(i)}, y^{(i)})$ is the $i$-th training example.
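The squared-error cost $J(𝜃) = \frac{1}{2m}\sum_{i=1}^{m}(h_𝜃(x^{(i)}) - y^{(i)})^2$ is easy to check numerically. A minimal NumPy sketch (the function name and the tiny dataset are illustrative, not from the course files):

```python
import numpy as np

def compute_cost(X, y, theta):
    """Squared-error cost J(theta) = 1/(2m) * sum((X @ theta - y)^2).

    X: (m, n+1) design matrix whose first column is all ones (x_0 = 1)
    y: (m,) target vector; theta: (n+1,) parameter vector
    """
    m = len(y)
    residuals = X @ theta - y
    return residuals @ residuals / (2 * m)

# Tiny illustrative example: y = 1 + 2*x fits exactly, so the cost is 0.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(compute_cost(X, y, np.array([1.0, 2.0])))  # 0.0
```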

### Gradient Descent

Repeat {

$$𝜃_j := 𝜃_j - α\frac{\partial}{\partial 𝜃_j}J(𝜃)$$

} (simultaneously update for every $j = 0,…,n$)

Expanding the partial derivative of $J(𝜃)$, the update above is equivalent to the following one:

$$𝜃_j := 𝜃_j - α\frac{1}{m}\sum_{i=1}^{m}\left(h_𝜃(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$
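In code, one pass of batch gradient descent applies the update to every $𝜃_j$ at once. A NumPy sketch, assuming a design matrix whose first column is all ones (the toy data is illustrative):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, num_iters=1000):
    """Batch gradient descent for linear regression.

    The vectorized step X.T @ (X @ theta - y) / m computes the partial
    derivative for every j, so all theta_j are updated simultaneously.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        gradient = X.T @ (X @ theta - y) / m
        theta -= alpha * gradient
    return theta

# Toy data generated from y = 1 + 2*x; theta should approach [1, 2].
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(gradient_descent(X, y))
```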

### Feature Scaling

The idea is to make sure features are on a similar scale, so that gradient descent converges to the optimum faster.

There are many ways to do this; one common method is mean normalization:

$$x_i = \frac{x_i - \mu_i}{s_i}$$

$\mu_i$ is the average value of $x_i$ in the training set.

$s_i$ is the range of $x_i$, that is, the maximum value of $x_i$ minus the minimum value of $x_i$. $s_i$ can also be the standard deviation of $x_i$.
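Mean normalization is a one-liner per feature column. A NumPy sketch (the feature values are illustrative):

```python
import numpy as np

def mean_normalize(X):
    """Mean normalization: (x - mu) / s per column, with s = max - min.

    X holds raw feature columns (no leading column of ones).
    s could also be the per-column standard deviation, X.std(axis=0).
    """
    mu = X.mean(axis=0)
    s = X.max(axis=0) - X.min(axis=0)
    return (X - mu) / s, mu, s

# Illustrative house features: size (sq ft) and number of bedrooms.
X = np.array([[2104.0, 3.0], [1416.0, 2.0], [1534.0, 4.0]])
X_norm, mu, s = mean_normalize(X)
print(X_norm)  # each column now has mean 0 and range 1
```

Keep `mu` and `s` around: any new example must be scaled with the same values before predicting.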

### Learning Rate α

$$ 𝜃_j := 𝜃_j - α\frac{\partial}{\partial 𝜃_j}J(𝜃) $$

With a well-chosen α, the cost function $J(𝜃)$ should decrease after each iteration.

If gradient descent is not converging and your code is correct, you should use a smaller α.

For sufficiently small α, $J(𝜃)$ should decrease on every iteration.

But if α is too small, gradient descent can be slow to converge, so in practice we try several values of the learning rate and pick a good one.
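One way to pick α is to run a few iterations for several candidate rates and keep the largest one for which $J(𝜃)$ decreases on every iteration. A NumPy sketch with an illustrative toy dataset:

```python
import numpy as np

def cost_history(X, y, alpha, num_iters=50):
    """Run gradient descent and record J(theta) after each iteration."""
    m, n = X.shape
    theta = np.zeros(n)
    history = []
    for _ in range(num_iters):
        theta -= alpha * (X.T @ (X @ theta - y)) / m
        history.append(((X @ theta - y) ** 2).sum() / (2 * m))
    return history

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])

# Candidate rates spaced roughly 3x apart; for each, check whether the
# recorded costs are monotonically non-increasing.
for alpha in (0.01, 0.03, 0.1, 0.3):
    J = cost_history(X, y, alpha)
    decreasing = all(b <= a for a, b in zip(J, J[1:]))
    print(alpha, J[-1], decreasing)
```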

### Normal Equation

Normal equation: a method to solve for 𝜃 analytically, in one step rather than by iterating.

$$𝜃 = (X^TX)^{-1}X^T \vec y$$

You might wonder where this formula comes from. Here is the derivation:

In general $X𝜃 = \vec y$ has no exact solution, so we pick the 𝜃 that minimizes the squared error $\|X𝜃 - \vec y\|^2$. At the minimum, the residual $\vec y - X𝜃$ is orthogonal to every column of $X$:

$$\begin{align} X^T(\vec y-X𝜃) & = 0 \\
X^T\vec y-X^TX𝜃 & = 0 \\
X^TX𝜃 & = X^T\vec y \\
𝜃 & = (X^TX)^{-1}X^T\vec y \end{align}$$

The normal equation is **slow** when the number of features $n$ is very large, because you have to compute $(X^TX)^{-1}$, which costs roughly $O(n^3)$.
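The closed-form solution is short in NumPy. Solving the linear system $X^TX𝜃 = X^T\vec y$ directly is numerically preferable to forming the inverse explicitly (the toy data is illustrative):

```python
import numpy as np

# Design matrix with a leading column of ones (x_0 = 1) and targets
# generated from y = 1 + 2*x, so the exact solution is theta = [1, 2].
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])

# Solve X^T X theta = X^T y; use np.linalg.pinv(X) @ y instead when
# X^T X is singular (e.g. redundant features).
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # [1. 2.]
```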

### Algorithm Implementation

We will implement linear regression with multiple variables to predict the prices of houses. Suppose you are selling your house and you want to know what a good market price would be.

Suppose you have the file **ex1data2.txt**, which contains a training set of housing prices in Portland, Oregon. The first column is the size of the house (in square feet), the second column is the number of bedrooms, and the third column is the price of the house.
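The course scripts are written in Octave, but assuming the comma-separated layout described above, loading the data might look like this in NumPy (a single sample row stands in for the real file):

```python
import numpy as np
from io import StringIO

# ex1data2.txt layout: size,bedrooms,price — one example per line.
sample = StringIO("2104,3,399900\n")

data = np.atleast_2d(np.loadtxt(sample, delimiter=","))
X_raw, y = data[:, :2], data[:, 2]  # features and prices
print(X_raw, y)
```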

**Linear_Regression_Gradient_Descent.m**

```matlab
%Linear regression with multiple variables
```

**Normal_Equation.m**

```matlab
%Normal Equation
```

A sample row of the dataset **ex1data2.txt**:

```
2104,3,399900
```