What is Orthogonalization?
Orthogonalization is a method that calculates an orthonormal basis for the subspace spanned by a given set of vectors.
Given vectors [Tex]a_1,…,a_k[/Tex] in [Tex]R^n[/Tex], the orthogonalization process determines vectors [Tex]q_1,…,q_r[/Tex] in [Tex]R^n[/Tex] such that:
span{[Tex]a_1[/Tex],…,[Tex]a_k[/Tex]}=span{[Tex]q_1[/Tex],…,[Tex]q_r[/Tex]}
Here, r represents the dimension of the subspace spanned by [Tex]a_1,…,a_k[/Tex] (so [Tex]r\leq k[/Tex]).
Additionally, the resulting vectors [Tex]q_i[/Tex] satisfy the following conditions:
[Tex]q_{i}^{T} q_{j} = 0 [/Tex] for [Tex]i\ne j [/Tex]
[Tex]q_{i}^{T} q_{i} = 1 [/Tex] for [Tex]1\leq i \leq r [/Tex]
In other words, the vectors [Tex]q_1[/Tex],…,[Tex]q_r[/Tex] constitute an orthonormal basis for the subspace spanned by [Tex]a_1, …, a_k[/Tex].
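The conditions above can be realized with the classical Gram–Schmidt procedure: subtract from each input vector its projections onto the previously computed basis vectors, then normalize. Below is a minimal sketch in NumPy; the function name and tolerance are illustrative choices, not part of a standard API.

```python
import numpy as np

def gram_schmidt(A, tol=1e-10):
    """Return a matrix Q whose columns are an orthonormal basis for the
    column space of A, computed by classical Gram-Schmidt (a sketch)."""
    Q = []
    for a in A.T:                       # iterate over input vectors a_1, ..., a_k
        v = a.astype(float).copy()
        for q in Q:                     # remove components along earlier q_i
            v -= (q @ a) * q
        norm = np.linalg.norm(v)
        if norm > tol:                  # drop (nearly) linearly dependent vectors
            Q.append(v / norm)
    return np.column_stack(Q)           # columns are q_1, ..., q_r

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(A)
# Q.T @ Q is (numerically) the r x r identity, confirming orthonormality
```

Note that the number of returned columns r can be smaller than k when the inputs are linearly dependent; the tolerance check is what discards redundant vectors.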
Orthogonalization in Machine Learning
Orthogonalization is a concept from linear algebra that helps reduce the complexity of machine learning models, making them easier to understand, debug, and optimize. In this article, we explore the fundamental concept of orthogonalization, common orthogonalization techniques, and its applications in machine learning.