You are given 6 training examples for a binary classification problem as follows:

X1   X2   Y
10    0   +
 0  -10   +
 5   -2   +
 5   10   -
 0    5   -
 5    5   -

Plot these points on a 2-dimensional grid and show the decision boundary resulting from 1-Nearest Neighbor. Hint: to get started, connect each point with the closest point from the opposite class. Then, for each of the 5 line segments drawn in this way, draw its perpendicular bisector. The decision boundary consists of certain segments of these bisectors. (A programmatic sanity check of this construction is sketched below.)
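One way to check the hand-drawn boundary is to label a dense grid of points with 1-Nearest Neighbor and look at where the predicted label flips. The following Python sketch does exactly that; it assumes NumPy and Matplotlib are available, and all variable names are illustrative rather than taken from the problem:

    import numpy as np
    import matplotlib.pyplot as plt

    # Training examples from the table above (1 encodes '+', 0 encodes '-').
    X = np.array([[10, 0], [0, -10], [5, -2],   # positive examples
                  [5, 10], [0, 5],   [5, 5]])   # negative examples
    y = np.array([1, 1, 1, 0, 0, 0])

    def predict_1nn(point):
        # Label of the training example nearest in Euclidean distance.
        dists = np.linalg.norm(X - np.asarray(point), axis=1)
        return y[np.argmin(dists)]

    # Evaluate 1-NN on a fine grid; the decision boundary runs wherever
    # adjacent grid cells receive different labels.
    xs = np.linspace(-3, 13, 161)
    ys = np.linspace(-12, 12, 241)
    labels = np.array([[predict_1nn((a, b)) for a in xs] for b in ys])

    plt.contourf(xs, ys, labels, levels=[-0.5, 0.5, 1.5], alpha=0.3)
    plt.scatter(X[:, 0], X[:, 1], c=y)
    plt.xlabel('X1'); plt.ylabel('X2')
    plt.show()

The filled regions approximate the piecewise-linear boundary built from the perpendicular bisectors described in the hint.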

Answer:

To simplify the discussion, we will consider only two-class classifiers in this section and define a linear classifier as a two-class classifier that decides class membership by comparing a linear combination of the features to a threshold.

Figure 14.8: There are an infinite number of hyperplanes that separate two linearly separable classes.


In two dimensions, a linear classifier is a line. Five examples are shown in Figure 14.8. These lines have the functional form $w_1x_1+w_2x_2=b$. The classification rule of a linear classifier is to assign a document to $c$ if $w_1x_1+w_2x_2>b$ and to $\overline{c}$ if $w_1x_1+w_2x_2\leq b$. Here, $(x_1, x_2)^{T}$ is the two-dimensional vector representation of the document and $(w_1, w_2)^{T}$ is the parameter vector that defines (together with $b$) the decision boundary. An alternative geometric interpretation of a linear classifier is provided in Figure 15.7.
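For instance, with the illustrative parameters $w_1=1$, $w_2=1$, and $b=5$ (chosen here purely for concreteness; they are not part of the original text), the point $(10, 0)^{T}$ from the table above scores $1\cdot 10 + 1\cdot 0 = 10 > 5$ and is assigned to $c$, while $(0, 5)^{T}$ scores $1\cdot 0 + 1\cdot 5 = 5 \leq 5$ and is assigned to $\overline{c}$.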

We can generalize this 2D linear classifier to higher dimensions by defining a hyperplane as we did in Equation 140, repeated here as Equation 144:

\begin{displaymath}
\vec{w}^{T}\vec{x} = b \qquad (144)
\end{displaymath}

The assignment criterion then is: assign to $c$ if $\vec{w}^{T}\vec{x} > b$ and to $\overline{c}$ if $\vec{w}^{T}\vec{x} \leq b$. We call a hyperplane that we use as a linear classifier a decision hyperplane.

Figure 14.9: Linear classification algorithm.

ApplyLinearClassifier($\vec{w}$, $b$, $\vec{x}$)
  $score \leftarrow \sum_{i=1}^{M} w_i x_i$
  if $score > b$
    return 1
  else
    return 0

The corresponding algorithm for linear classification in $M$ dimensions is shown in Figure 14.9. Linear classification at first seems trivial given the simplicity of this algorithm. However, the difficulty is in training the linear classifier, that is, in determining the parameters $\vec{w}$ and $b$ based on the training set.
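As a concrete illustration, here is a minimal Python sketch of the pseudocode in Figure 14.9 (a transcription for this answer, not code from the text); the assignment criterion of Equation 144 is just a dot product compared against the threshold $b$:

    def apply_linear_classifier(w, b, x):
        # Score is the linear combination w^T x; compare it to the threshold b.
        score = sum(w_i * x_i for w_i, x_i in zip(w, x))
        return 1 if score > b else 0   # 1 encodes class c, 0 encodes c-bar

    # With the illustrative parameters from the 2D example above:
    print(apply_linear_classifier([1.0, 1.0], 5.0, [10.0, 0.0]))  # -> 1 (class c)
    print(apply_linear_classifier([1.0, 1.0], 5.0, [0.0, 5.0]))   # -> 0 (class c-bar)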
