Introduction to Linear Algebra (6): Orthogonal and Least Squares
Orthogonal and Least Squares
Inner Product, Length, and Orthogonality
Two vectors u and v in Rn are orthogonal if u⋅v=0.
Two vectors u and v are orthogonal if and only if ||u+v||² = ||u||² + ||v||².
Let A be an m×n matrix. The orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of Aᵀ: (Row A)⊥ = Nul A, (Col A)⊥ = Nul Aᵀ.
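As a quick numerical illustration (the matrix below is hypothetical, not from the text), every vector in Nul A is orthogonal to every row of A, which is exactly the statement (Row A)⊥ = Nul A:

```python
import numpy as np

# Hypothetical rank-1 matrix; Nul A is then 2-dimensional.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])

# Basis for Nul A from the SVD: rows of Vt beyond the numerical rank span Nul A.
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
null_basis = Vt[rank:]

# Every null-space vector n satisfies A @ n = 0, i.e. it is orthogonal to each row of A.
print(np.allclose(A @ null_basis.T, 0))   # True
```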
Orthogonal Sets
A set of vectors {u1,⋯,up} in Rn is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, that is, if ui⋅uj = 0 whenever i ≠ j.
Let {u1,⋯,up} be an orthogonal basis for a subspace W of Rn. For each y in W, the weights in the linear combination y = c1u1 + ⋯ + cpup
are given by cj = (y⋅uj)/(uj⋅uj), j = 1,⋯,p.
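A minimal sketch of this weight formula, using a hypothetical orthogonal basis of R³ (so W = R³); the point is that each weight is a single dot-product quotient and no linear system has to be solved:

```python
import numpy as np

# Hypothetical orthogonal basis of R^3 (check: each pair has dot product 0).
u1 = np.array([3., 1., 1.])
u2 = np.array([-1., 2., 1.])
u3 = np.array([-1., -4., 7.])
y  = np.array([6., 1., -8.])

# Weights c_j = (y . u_j) / (u_j . u_j)
c = [(y @ u) / (u @ u) for u in (u1, u2, u3)]
print(c)                                              # [1.0, -2.0, -1.0]
print(np.allclose(c[0]*u1 + c[1]*u2 + c[2]*u3, y))    # True: y is recovered
```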
An Orthogonal Projection
For a nonzero vector u and the line L = Span{u}, the orthogonal projection of y onto L is proj_L y = ((y⋅u)/(u⋅u)) u.
Orthonormal Sets
A set {u1,⋯,up} is an orthonormal set if it is an orthogonal set of unit vectors.
An m×n matrix U has orthonormal columns if and only if UᵀU = I.
Let U be an m×n matrix with orthonormal columns, and let x and y be in Rn. Then
a. ||Ux|| = ||x||
b. (Ux)⋅(Uy) = x⋅y
c. (Ux)⋅(Uy) = 0 if and only if x⋅y = 0
The Best Approximation Theorem
Let W be a subspace of Rn, let y be any vector in Rn, and let ŷ be the orthogonal projection of y onto W. Then ŷ is the closest point in W to y, in the sense that ||y − ŷ|| < ||y − v||
for all v in W distinct from ŷ.
If {u1,⋯,up} is an orthonormal basis for a subspace W of Rn, then proj_W y = (y⋅u1)u1 + (y⋅u2)u2 + ⋯ + (y⋅up)up.
If U = [u1 u2 ⋯ up], then proj_W y = UUᵀy for all y in Rn.
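A small sketch of both projection formulas, assuming a hypothetical pair of orthonormal vectors u1, u2 (an orthogonal pair normalized to unit length):

```python
import numpy as np

# Hypothetical orthonormal vectors spanning a plane W in R^3.
u1 = np.array([3., 1., 1.]) / np.sqrt(11.)
u2 = np.array([-1., 2., 1.]) / np.sqrt(6.)
U = np.column_stack([u1, u2])
y = np.array([6., 1., -8.])

proj_sum = (y @ u1) * u1 + (y @ u2) * u2   # (y.u1)u1 + (y.u2)u2
proj_mat = U @ U.T @ y                     # U U^T y
print(np.allclose(proj_sum, proj_mat))     # True: both give proj_W y
print(np.allclose(U.T @ U, np.eye(2)))     # True: columns of U are orthonormal
```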
The Gram-Schmidt Process
Given a basis {x1,⋯,xp} for a nonzero subspace W of Rn, define
v1 = x1
v2 = x2 − ((x2⋅v1)/(v1⋅v1)) v1
v3 = x3 − ((x3⋅v1)/(v1⋅v1)) v1 − ((x3⋅v2)/(v2⋅v2)) v2
⋮
vp = xp − ((xp⋅v1)/(v1⋅v1)) v1 − ⋯ − ((xp⋅vp−1)/(vp−1⋅vp−1)) vp−1
Then {v1,⋯,vp} is an orthogonal basis for W. In addition, Span{v1,⋯,vk} = Span{x1,⋯,xk} for 1 ≤ k ≤ p.
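A minimal sketch of the process in code, assuming the columns of the hypothetical matrix X are linearly independent; the inner loop subtracts the projections exactly as in the formulas above:

```python
import numpy as np

def gram_schmidt(X):
    """Return an orthogonal basis (as columns) for Col X, assuming independent columns."""
    V = []
    for x in X.T:                          # process the columns x_1, ..., x_p in order
        v = x.astype(float)
        for u in V:                        # subtract the projection of x onto each earlier v_k
            v = v - (x @ u) / (u @ u) * u
        V.append(v)
    return np.column_stack(V)

X = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])
V = gram_schmidt(X)
print(np.round(V.T @ V, 10))               # off-diagonal entries are 0: columns are orthogonal
```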
QR Factorization of Matrices
If A is an m×n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m×n matrix whose columns form an orthonormal basis for Col A and R is an n×n upper triangular invertible matrix with positive entries on its diagonal.
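For instance, applied to the same hypothetical matrix as above, numpy's np.linalg.qr returns such a factorization (numpy does not force the diagonal of R to be positive, so signs may differ from the theorem's convention):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])

Q, R = np.linalg.qr(A)                    # reduced QR: Q is 4x3, R is 3x3 upper triangular
print(np.allclose(Q @ R, A))              # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: Q has orthonormal columns
print(np.round(R, 3))                     # upper triangular factor
```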
Least-Squares Problems
If A is m×n and b is in Rm, a least-squares solution of Ax = b is an x̂ in Rn such that ||b − Ax̂|| ≤ ||b − Ax||
for all x in Rn.
Note that Ax̂ is in Col A; in fact, Ax̂ is the orthogonal projection of b onto Col A.
Let A be an m×n matrix. The following statements are logically equivalent:
a. The equation Ax = b has a unique least-squares solution for each b in Rm.
b. The columns of A are linearly independent.
c. The matrix AᵀA is invertible.
When these statements are true, the least-squares solution x̂ is given by x̂ = (AᵀA)⁻¹Aᵀb.
The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations AᵀAx = Aᵀb.
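A small numerical sketch with a hypothetical inconsistent system: solving the normal equations AᵀAx = Aᵀb reproduces numpy's built-in least-squares solver:

```python
import numpy as np

# Hypothetical overdetermined system Ax = b (b is not in Col A).
A = np.array([[4., 0.],
              [0., 2.],
              [1., 1.]])
b = np.array([2., 0., 11.])

# Normal equations: (A^T A) x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                                                      # [1. 2.]
print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
```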
Alternative Calculations of Least-Squares Solutions
Given an m×n matrix A with linearly independent columns, let A = QR be a QR factorization of A. Then for each b in Rm, the equation Ax = b has a unique least-squares solution, given by x̂ = R⁻¹Qᵀb.
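The same hypothetical system solved via the QR route; in practice one back-substitutes the triangular system Rx = Qᵀb rather than forming R⁻¹ explicitly:

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4., 0.],
              [0., 2.],
              [1., 1.]])
b = np.array([2., 0., 11.])

Q, R = np.linalg.qr(A)                    # reduced QR factorization
x_hat = solve_triangular(R, Q.T @ b)      # solve R x = Q^T b (R is upper triangular)
print(x_hat)                              # [1. 2.], same as the normal equations
```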
Least-Squares Lines
For the least-squares line y = β0 + β1x and data points (x1,y1),⋯,(xn,yn), the predicted y-value at xj is β0 + β1xj and the observed y-value is yj.
We can write this system as Xβ = y, where X is the design matrix with rows [1 x1], [1 x2], ⋯, [1 xn], β = [β0, β1], and y = [y1,⋯,yn].
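A minimal sketch with hypothetical data points: build the design matrix X and solve the normal equations for β = [β0, β1]:

```python
import numpy as np

# Hypothetical data points (x_j, y_j); fit the line y = beta0 + beta1 * x.
x = np.array([2., 5., 7., 8.])
y = np.array([1., 2., 3., 3.])

X = np.column_stack([np.ones_like(x), x])   # design matrix: a column of 1s and the x-values
beta = np.linalg.solve(X.T @ X, X.T @ y)    # normal equations: X^T X beta = X^T y
print(beta)                                 # [beta0, beta1]
```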
Inner Product Spaces
An inner product on a vector space V is a function that, to each pair of vectors u and v in V, associates a real number <u,v> and satisfies the following axioms, for all u,v, and w in V and all scalars c:
1.<u,v>=<v,u>
2.<u+v,w>=<u,w>+<v,w>
3.<cu,v>=c<u,v>
4.<u,u>≥0 and <u,u>=0 if and only if u=0
A vector space with an inner product is called an inner product space.
Trend Analysis of Data
ĝ = c0p0 + c1p1 + c2p2 + c3p3, where ĝ is called the cubic trend function and c0,⋯,c3 are the trend coefficients of the data. The Gram-Schmidt process can be used to construct an orthogonal basis of polynomials p0,⋯,p3 (with respect to the inner product over the data points), and the trend coefficients are then cj = <g,pj>/<pj,pj>.
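A sketch of trend analysis under the discrete inner product <p,q> = Σᵢ p(tᵢ)q(tᵢ), with hypothetical data t, g; Gram-Schmidt applied to the values of 1, t, t², t³ gives the orthogonal trend polynomials, and the weight formula gives the coefficients:

```python
import numpy as np

# Hypothetical data: observation times t and measurements g.
t = np.array([-2., -1., 0., 1., 2.])
g = np.array([3., 5., 5., 4., 3.])

P = np.column_stack([t**k for k in range(4)])   # columns: 1, t, t^2, t^3 evaluated at the data

# Gram-Schmidt on the columns gives the orthogonal trend polynomials p_0, ..., p_3.
V = []
for col in P.T:
    v = col.astype(float)
    for u in V:
        v = v - (col @ u) / (u @ u) * u
    V.append(v)

# Trend coefficients c_j = <g, p_j> / <p_j, p_j> under the discrete inner product.
c = [(g @ v) / (v @ v) for v in V]
print(np.round(c, 4))
```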
Fourier Series
Any continuous function on [0, 2π] can be approximated as closely as desired by a function of the form a0/2 + a1 cos t + ⋯ + an cos nt + b1 sin t + ⋯ + bn sin nt.
The set {1, cos t, cos 2t, ⋯, cos nt, sin t, sin 2t, ⋯, sin nt} is orthogonal with respect to the inner product <f,g> = ∫₀²ᵖⁱ f(t)g(t) dt. Thus ak = <f, cos kt>/<cos kt, cos kt> and bk = <f, sin kt>/<sin kt, sin kt> for k ≥ 1.
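A numerical sketch of these coefficient formulas for a hypothetical f (here f(t) = t on [0, 2π]), using scipy's quad for the integrals:

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: t                            # hypothetical example function on [0, 2*pi]

def inner(p, q):
    """Inner product <p, q> = integral of p(t)q(t) over [0, 2*pi]."""
    return quad(lambda t: p(t) * q(t), 0, 2*np.pi)[0]

k = 1
a_k = inner(f, lambda t: np.cos(k*t)) / inner(lambda t: np.cos(k*t), lambda t: np.cos(k*t))
b_k = inner(f, lambda t: np.sin(k*t)) / inner(lambda t: np.sin(k*t), lambda t: np.sin(k*t))
print(a_k, b_k)                            # for f(t) = t: a_1 ≈ 0, b_1 ≈ -2
```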