### Implementation

NumPy basic operations: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html#basic-operations

cvxopt for solving quadratic programs: http://cvxopt.org/userguide/coneprog.html#quadratic-programming

Using `numpy.matrix` is generally not recommended: http://stackoverflow.com/questions/3890621/how-does-multiplication-differ-for-numpy-matrix-vs-array-classes/3892639#3892639

Confusing matrix representation:

```
>>> np.array([1, 2, 3]).transpose()
array([1, 2, 3])
>>> np.array([[1, 2, 3]]).transpose()
array([[1],
       [2],
       [3]])
>>> np.array([[4], [5], [6]]).dot(np.array([[1, 2, 3]]))
array([[ 4,  8, 12],
       [ 5, 10, 15],
       [ 6, 12, 18]])
```
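With plain `ndarray`s, being explicit about shapes avoids the ambiguity above. A small sketch of the usual idioms:

```python
import numpy as np

v = np.array([1, 2, 3])   # shape (3,): 1-D, so .transpose() is a no-op
col = v.reshape(-1, 1)    # shape (3, 1): an explicit column vector
row = v.reshape(1, -1)    # shape (1, 3): an explicit row vector

# Outer product without fussing over row/column orientation:
outer = np.outer([4, 5, 6], v)
print(outer)
# [[ 4  8 12]
#  [ 5 10 15]
#  [ 6 12 18]]

# Dot of two 1-D arrays is simply the scalar inner product:
print(v.dot(v))  # 14
```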

### Theoretic Interests

TODO: the discussions below have the implication direction reversed (necessary vs. sufficient conditions)

##### Lagrange Multiplier

I wrote an (informal and simple) proof of the theorem about the one-way implication between the original problem and the "_Lagrange-multiplier_ed" problem. In simple terms, we can say two things:

- if you find an answer to the converted problem, it must also be an answer to the original problem,
- but there may be problems where you cannot find an answer in the converted problem even though an answer exists in the original problem.
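For concreteness, here is a tiny numeric check of the multiplier condition (my own toy example, not from the proof): minimize f(x, y) = x + y on the circle g(x, y) = x² + y² − 1 = 0. The constrained minimizer is (−1/√2, −1/√2), and at that point ∇f = λ∇g holds for some λ:

```python
import math

# Minimizer of f(x, y) = x + y subject to x^2 + y^2 = 1.
x = y = -1.0 / math.sqrt(2.0)

grad_f = (1.0, 1.0)          # gradient of f is (1, 1) everywhere
grad_g = (2.0 * x, 2.0 * y)  # gradient of g at the minimizer

# Solve for the multiplier from the first component, then verify the second.
lam = grad_f[0] / grad_g[0]
residual = grad_f[1] - lam * grad_g[1]
print(lam)       # -1/sqrt(2), about -0.7071
print(residual)  # 0: the gradients are parallel, as the theorem states
```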

##### Karush–Kuhn–Tucker Conditions

This technique extends the *Lagrange Multiplier* method so that it covers inequality constraints. But, of course, this requires conditions on the objective function and constraints.

The optimization problem that appears in finding the SVM can be written as very simple equations and inequalities, which satisfy those conditions, so it can be solved by this technique.

Intuitively, an inequality can be dealt with in the same way as the equality case, since the extreme point lies on the boundary of the region defined by the inequality anyway. (Maybe this is too rough.)
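To make the "extremum on the boundary" intuition concrete, here is a check of the KKT conditions on a one-variable toy problem (my own example): minimize f(x) = x² subject to x ≥ 1, written as g(x) = 1 − x ≤ 0.

```python
# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# The unconstrained minimum (x = 0) is infeasible, so the constrained
# optimum sits on the boundary x = 1, where the constraint is active.
x_star = 1.0
mu = 2.0  # candidate multiplier, from stationarity: 2x - mu = 0 at x = 1

stationarity = 2.0 * x_star - mu        # d/dx of L = x^2 + mu*(1 - x)
primal_feasible = (1.0 - x_star) <= 0.0
dual_feasible = mu >= 0.0
comp_slack = mu * (1.0 - x_star)        # complementary slackness

print(stationarity, primal_feasible, dual_feasible, comp_slack)
# 0.0 True True 0.0 -> all four KKT conditions hold at x = 1
```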

TODO: write down the problem definition for the SVM case (minimization)

### Future Work

- follow the proof of the necessity implication in *Lagrange Multiplier* and *KKT conditions*