Consider the two-dimensional XOR dataset D, which contains the following 4 examples with binary labels: D = {([0,0], −1), ([1,1], −1), ([0,1], +1), ([1,0], +1)}, i.e., the truth table of the logical XOR function.

  x_1  x_2 |  y
   0    0  | −1
   0    1  | +1
   1    0  | +1
   1    1  | −1

• Q7.1 Prove that D is not linearly separable. (Hint: Show that there cannot be a vector of parameters w such that wᵀx > 0 for all examples x that are positive, and wᵀx < 0 for all examples x that are negative. Do not forget to add the bias feature x_0 = 1 to each example x.)

• Q7.2 Add a third feature to each example in the dataset that is the product of the two original features, i.e., for each example x the new feature is computed as x_3 = x_1 x_2. Is the new dataset linearly separable? Prove your answer.

Let w be the current vector of parameters in the Perceptron algorithm, right before executing the if statement on line 4 (the algorithm is on page 4 of the Neural Networks slides), and let t_n be the label for x_n.

• Q8.1 Prove that if the algorithm executes the if clause on line 5, i.e. the model made a mistake on example x_n, then t_n wᵀx_n ≤ 0.

• Q8.2 Let w′ be the new weight vector after executing the if clause on line 5, i.e. w′ = w + t_n x_n. Prove that t_n w′ᵀx_n ≥ t_n wᵀx_n.
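As a sanity check on Q7.1 and Q7.2 (not a substitute for the proofs), the sketch below brute-forces small integer weight vectors for the bias-augmented XOR data and finds none, then verifies one hand-picked separating vector once the product feature x_3 = x_1 x_2 is added. The weight vector [-1, 2, 2, -4] is just an illustrative choice, not the only one.

```python
from itertools import product

# XOR dataset with the bias feature x_0 = 1 prepended: ([x0, x1, x2], t)
D = [([1, 0, 0], -1), ([1, 1, 1], -1), ([1, 0, 1], +1), ([1, 1, 0], +1)]

def separates(w, data):
    # w separates the data iff t * (w . x) > 0 for every example (x, t)
    return all(t * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, t in data)

# Q7.1 sanity check: no small integer weight vector separates XOR.
# (The proof covers all real w; this only scans {-3,...,3}^3.)
found = [w for w in product(range(-3, 4), repeat=3) if separates(w, D)]
print(found)  # -> []

# Q7.2: append the product feature x_3 = x_1 * x_2 to each example.
D3 = [([x[0], x[1], x[2], x[1] * x[2]], t) for x, t in D]
w = [-1, 2, 2, -4]  # one separating weight vector (illustrative choice)
print(separates(w, D3))  # -> True
```

The empty search result matches the contradiction in the proof: summing the constraints for the two positive examples gives 2w_0 + w_1 + w_2 > 0, while summing the two negative ones gives 2w_0 + w_1 + w_2 < 0.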
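The Q8.2 claim can also be checked numerically. Expanding the update gives t_n w′ᵀx_n = t_n wᵀx_n + t_n² ‖x_n‖² = t_n wᵀx_n + ‖x_n‖², since t_n ∈ {−1, +1}; the sketch below (random vectors, illustrative dimensions) confirms this identity, so the margin on x_n can only increase after the update.

```python
import random

random.seed(0)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Check: for w' = w + t*x, we get t*(w'. x) = t*(w . x) + ||x||^2,
# hence t*(w'. x) >= t*(w . x).
for _ in range(1000):
    w = [random.uniform(-1, 1) for _ in range(3)]
    x = [random.uniform(-1, 1) for _ in range(3)]
    t = random.choice([-1, +1])
    w_new = [wi + t * xi for wi, xi in zip(w, x)]
    lhs = t * dot(w_new, x)
    rhs = t * dot(w, x) + dot(x, x)  # t**2 == 1 since t is +1 or -1
    assert abs(lhs - rhs) < 1e-9
    assert lhs >= t * dot(w, x)
print("margin identity holds on 1000 random trials")
```

Note this only shows the margin on the mistaken example improves; it does not by itself prove the example becomes correctly classified after one update.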