
I am a computer science research student working on applying Machine Learning to Computer Vision problems.

Since a lot of linear algebra (eigenvalues, SVD, etc.) comes up when reading Machine Learning/Vision literature, I decided to take a linear algebra course this semester.
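To give a concrete example of what I mean: one place SVD shows up constantly in vision is low-rank approximation of an image matrix. A minimal NumPy sketch (the random matrix below just stands in for a grayscale image):

```python
import numpy as np

# Treat a grayscale image as a matrix A and keep its top-k singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 48))  # stand-in for an image

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 10  # rank of the approximation
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, A_k is the best rank-k approximation
# of A in the Frobenius norm.
err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"relative error of rank-{k} approximation: {err:.3f}")
```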

Much to my surprise, the course looks nothing like Gilbert Strang's Applied Linear Algebra (on OCW), which I had started taking earlier. The course textbook is Linear Algebra by Hoffman and Kunze. We started with concepts from abstract algebra (groups, fields, rings, isomorphisms, quotient groups, etc.) and then moved on to "theoretical" linear algebra over finite fields, where we cover proofs of important theorems/lemmas on the following topics:

- Vector spaces, linear span, linear independence, existence of a basis
- Linear transformations
- Solutions of linear equations, row reduced echelon form, complete echelon form, rank
- Minimal polynomial of a linear transformation
- Jordan canonical form
- Determinants
- Characteristic polynomial, eigenvalues and eigenvectors
- Inner product spaces
- Gram-Schmidt orthogonalization (see the sketch after this list)
- Unitary and Hermitian transformations
- Diagonalization of Hermitian transformations
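To be clear about what I mean by the "applied" side of these topics: for example, Gram-Schmidt orthogonalization as a short algorithm rather than an existence proof. A minimal NumPy sketch, assuming linearly independent columns:

```python
import numpy as np

def gram_schmidt(V: np.ndarray) -> np.ndarray:
    """Orthonormalize the columns of V (assumed linearly independent)
    via classical Gram-Schmidt."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        q = V[:, j].astype(float)
        for i in range(j):
            # Subtract the component of v_j along the already-built q_i.
            q -= (Q[:, i] @ V[:, j]) * Q[:, i]
        Q[:, j] = q / np.linalg.norm(q)
    return Q

V = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
Q = gram_schmidt(V)
assert np.allclose(Q.T @ Q, np.eye(3))  # columns are orthonormal
```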

I want to understand whether there is any significance/application of understanding these proofs in machine learning/computer vision research, or whether I would be better off focusing on applied linear algebra.

  • Strang's books are great for applied math and machine learning people. However, there is some value in understanding the abstract vector space viewpoint, because many spaces we care about (in engineering applications) are not subspaces of $\mathbb{R}^n$. For example, we care about spaces of matrices, and spaces of symmetric matrices. You can map these spaces to subspaces of $\mathbb{R}^n$, but it's not elegant and (in my opinion) not as clear. (2012-11-09)
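A minimal sketch of that comment's point, assuming NumPy: the symmetric $n \times n$ matrices form an $n(n+1)/2$-dimensional vector space in their own right, and flattening them into coordinate vectors forces you to choose an ad hoc map, e.g. the lower-triangular "vech" map:

```python
import numpy as np

# The symmetric n x n matrices are an n(n+1)/2-dimensional vector space,
# but identifying them with coordinate vectors requires picking a map.
n = 3
S = np.array([[2., 1., 0.],
              [1., 3., 4.],
              [0., 4., 5.]])

# One such map ("vech"): stack the lower-triangular entries.
idx = np.tril_indices(n)
v = S[idx]  # a point in R^6

# Even inverting the map is slightly awkward: rebuild S from v.
S_back = np.zeros((n, n))
S_back[idx] = v
S_back = S_back + S_back.T - np.diag(np.diag(S_back))
assert np.allclose(S, S_back)
```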

3 Answers

Answer 1 (score: 5)

If you want to do advanced computer vision, and not just implement algorithms, you will need to understand advanced algebraic concepts for linear transformations. You will also need to understand a bit of measure theory and analysis.

Why?

Because research-level computer vision involves the development of algorithms, and developing those algorithms necessarily invokes the structural properties of the underlying mathematical objects: measure, convergence, isometry, isomorphism, and so on.

Furthermore, suppose you have the mechanical skills to develop a computational method. Any true research-level effort is also expected to demonstrate a proof of convergence, establish a domain in which the method is effective, compare the method to prior methods, and weigh their respective strengths and weaknesses.

This requires at least a solid understanding of graduate-level analysis and linear algebra.

Answer 2 (score: 0)

From my personal experience, I think the most important topics are Probability, Statistics, and Matrix Algebra. Of course, the basics of Linear Algebra are also required, but I guess that goes without saying.

The topics you mentioned from the linear algebra course can be good to know. For example, there are many unsolved problems in current machine learning methods; with a strong foundation in linear algebra, you may be able to come up with solutions to some of them.

Hope this helps.

Answer 3 (score: 0)

In my opinion, linear algebra will be very important for your research in machine learning, because it not only provides a clean interpretation of many real problems but also yields simple solutions to them, such as the linear regression and linear classification models. In addition, knowledge of linear algebra will make it easier to study other subjects, including statistics, which is important for machine learning.
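For instance, fitting a linear regression model is just projecting the target vector onto the column space of the data matrix. A minimal NumPy sketch on synthetic data (the numbers are made up for illustration):

```python
import numpy as np

# Least-squares linear regression: fit y ~ X w by projecting y
# onto the column space of X.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(100)

# Closed form via the normal equations X^T X w = X^T y ...
w_normal = np.linalg.solve(X.T @ X, X.T @ y)

# ... or, more stably, via an SVD-based least-squares solver.
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w_normal, w_lstsq)  # both are close to w_true
```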

I can also recommend a book on linear algebra: "Linear Algebra Done Right" by Sheldon Axler.

Best!