Lin Alg: Representing vectors in Rn using subspace members: Showing that any member of Rn can be represented as a unique sum of a vector in subspace V and a vector in the orthogonal complement of V.
- Let's say I have some subspace V, that is a subset of Rn.
- And let's say that we also have its orthogonal
- complement, we write that as V perp.
- That'll also be a subset of Rn.
- A couple of videos ago, it might have even been the last
- video if I remember properly, we learned that the dimension
- of V, plus the dimension of the orthogonal complement of
- V, which is also another subspace, is going
- to be equal to n.
- Remember dimension is just the number of linearly independent
- vectors you need to have a basis for V.
- And the dimension here is the number of linearly independent
- vectors you need to have a basis for the orthogonal
- complement of V.
- Now given this, let's see if we can come up with some other
- interesting ways in which these two subspaces relate to
- each other.
- Or how they might relate to all of the vectors in Rn.
- So the first question is, do these two subspaces have
- anything in common?
- Are there any vectors that the two have in common?
- And to test whether there is, let's just assume there is,
- and see what the properties of that vector would have to be.
- Let's assume right here that I have some vector x that is a
- member of my subspace V.
- Let's also assume that x is a member of the orthogonal
- complement of V.
- Now what does this second statement mean?
- Membership in the orthogonal complement means that x dot v,
- for any v that is a member of our subspace, is going to be
- equal to 0.
- Let me write it this way actually.
- x dot v is equal to 0 for any v that is a
- member of our subspace.
- That's what it means to be a member of V's orthogonal
- complement.
- Now we assume that x is also a member of V.
- So that means that we can stick x in here as well,
- since this holds for any member of V,
- and x is also a member of V.
- So that implies that x dot x is equal to 0.
- Another way to write that is that the length of x squared
- is equal to 0.
- Or the length of x is equal to 0.
- And that's only true for one vector.
- You can even try it out with the different
- components of x.
- The only vector that that's true for is the 0 vector.
- So x has to be equal to the 0 vector.
- That's the only vector in Rn that when you dot it with
- itself you get 0, or whose squared
- length is equal to 0.
- And we've shown that many, many, many videos ago.
- What this tells us is that the intersection between V and
- V perp-- this kind of upside down U just means
- intersection, it just means where do these two sets
- overlap-- the only place that these overlap is the set
- containing just the 0 vector.
- So if I were to draw all of Rn like this.
- Let's say that this is Rn.
- And let's say I draw the subspace V.
- And let's say I draw the orthogonal complement to V.
- It's all of these vectors right here.
- This is the orthogonal complement to V right there.
- So this is V perp.
- These are all of the vectors that, when I dot them with any
- vector here, give me 0.
- So this is V perp.
- The intersection, their overlap, the only vector that
- is a member of both is the 0 vector.
- That's their only intersection.
- So that's fair enough.
- The only vector that's a member of a subspace and its
- orthogonal complement is the 0 vector.
- Nothing too profound there.
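(Editor's aside, not from the video: a minimal NumPy/SciPy sketch of this argument, assuming an illustrative basis matrix B_V for a 2-dimensional subspace V of R4. The matrix and all names are made up for the example.)

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical basis for a subspace V of R^4: the columns of B_V.
B_V = np.array([[1., 0.],
                [2., 1.],
                [0., 3.],
                [1., 1.]])

# A vector x = B_V @ c lies in V.  It also lies in V perp exactly when
# B_V.T @ x = 0, i.e. (B_V.T @ B_V) @ c = 0.  The Gram matrix of a
# linearly independent set is invertible, so c = 0, hence x = 0.
gram = B_V.T @ B_V
print(null_space(gram).shape[1])  # 0 -> only the trivial solution exists
```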
- Let's see if we can come up with some other interesting
- relations between the subspace and its orthogonal complement.
- Maybe some arbitrary vectors in Rn.
- So let's just write down-- let's say that the
- dimension of our subspace V is equal to k.
- If it's equal to k, we know that its dimension plus the
- dimension of its orthogonal complement has to be equal to n,
- because we're dealing in Rn.
- And we also know that the orthogonal complement of V is
- a subset of Rn, I drew it right here.
- The dimension of V is equal to k.
- That's a k right there.
- And what's the dimension of the orthogonal complement of V
- going to be?
- Well, when you add them together-- I wrote that up
- here-- they have to equal n.
- So this guy's going to have to be n minus k,
- if you have k here.
- This guy's dimension is k, and this guy's dimension right
- here is n minus k, so that when you add these two up, k
- plus n minus k is going to be equal to n.
- So this guy will have a dimension of n minus k.
- Now what does dimension mean?
- It means that that's the number of linearly independent
- vectors you need to form a basis.
- I have k vectors as a basis for V.
- I have v1, v2, all the way to vk.
- And this is a basis for V, which just means they're all
- linearly independent.
- And they span V.
- Any member of V right here can be represented as a linear
- combination of these vectors.
- Now the dimension of the orthogonal complement of
- V is n minus k.
- So we could have n minus k vectors.
- Let's call them w1, w2, all the way to wn minus k.
- We have n minus k of these characters.
- And this set is a basis for the orthogonal
- complement of V.
- So any vector in here can be represented as a linear
- combination of these guys right here.
- And all of these guys are linearly independent.
- So you don't have any redundant vectors there.
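(Another editorial sketch, with the same made-up B_V as above: the dimension count can be checked numerically, because V perp is exactly the null space of B_V transposed, and scipy.linalg.null_space returns an orthonormal basis for it.)

```python
import numpy as np
from scipy.linalg import null_space

# Same hypothetical basis for V in R^4 as before (n = 4, k = 2).
B_V = np.array([[1., 0.],
                [2., 1.],
                [0., 3.],
                [1., 1.]])
n, k = B_V.shape

# V perp = all x with v . x = 0 for every v in V = null space of B_V.T.
B_W = null_space(B_V.T)              # columns play the role of w1, ..., w_{n-k}
print(B_W.shape[1] == n - k)         # True: dim(V) + dim(V perp) = n
print(np.allclose(B_V.T @ B_W, 0))   # True: every w is orthogonal to V
```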
- Now let's explore.
- And I'll tell you where I'm trying to go.
- I'm trying to see if I combine these two sets, whether I get
- a basis for all of Rn.
- That's what I'm trying to understand.
- Let's just say that for some constants c1 times v1 plus c2
- times v2 plus all the way to ck times vk plus-- for the
- constants on these guys I'll use d-- plus d1 times w1 plus
- d2 times w2, all the way to plus dn minus k times the
- basis vector wn minus k.
- Let's say that I'm curious about setting this sum equal
- to 0, equaling the 0 vector, for some scalars.
- The scalars are these c's and these d's.
- And we know that there's at least one solution set of
- scalars for which this is true.
- We could take all of these constants-- c1, c2, ck, d1,
- d2, all the way to dn minus k.
- They could all be 0.
- Or there might be more than one solution.
- In fact, if the only solution is that all of these constants
- have to be equal to 0, then we know that all of these vectors
- are linearly independent with respect to each other.
- And if they're all linearly independent with respect to
- each other, then we know that they can be a basis for Rn.
- But we don't know that yet.
- We don't know that the only solution to this is all of the
- constants being equal to 0.
- So let's see if we can experiment with
- this a little bit.
- If we take this equation, which I just wrote down, we
- know that one solution is all of the constants, the c's and
- d's, equaling 0, but we don't know that
- that's the only one.
- Let's just subtract all of the w vectors from both sides of
- this equation.
- So what are we going to get?
- We're going to get c1 v1 plus c2 v2, all the
- way to plus ck vk.
- And we're going to subtract this from both
- sides of the equation.
- It's going to be equal to the 0 vector.
- Which is really just 0, I don't even have to write it
- down, but maybe I'll write it down there just so you
- understand.
- I'm just taking this equation, I'm subtracting these guys
- from both sides.
- So it's the 0 vector minus the quantity d1 w1 plus d2 w2
- plus all the way to dn minus k wn minus k.
- All I did is subtract these terms right here from both
- sides of this equation.
- I don't even have to write this 0 here;
- that's a bit redundant.
- So what I have here is some combination of the basis
- vectors of V.
- So if I look at this, this is some linear combination of the
- basis vectors in V.
- If I call this a vector-- let me call this some vector x.
- Let's say x is equal to c1 v1 plus c2 v2, all
- the way to ck vk.
- We know that it's a linear combination of our basis
- vectors of V, so x is a member of V.
- By definition, any linear combination of the basis
- vectors for a subspace is going to be a
- member of that subspace.
- Well, similarly, what do we have on the right-hand side of
- this equation?
- On the right-hand side of this equation, I have some linear
- combination of the basis vectors of V's orthogonal
- complement.
- You could just put a minus all along that, but that won't
- change the fact that this is some linear combination of V
- perp's basis vectors.
- So this vector over here is going to be a member of-- we
- could also call this x.
- So x is equal to this, but it's also going to be equal to
- this, and since it can be represented as a linear
- combination of the orthogonal complement of V's basis
- vectors, or V perp's basis vectors, we know that this also
- has to be a member of V perp.
- Let me just review this, because it can be
- a little bit confusing.
- I just set up this equation right here.
- We know that there's at least one solution-- all of the
- constants equalling 0.
- Anyone could do this.
- Now I subtracted all of the yellow terms from both sides,
- and I got this equality.
- The left-hand side of this equality is a linear
- combination of the basis vectors of V.
- So any linear combination of the basis vectors of V is
- going to be a member of V.
- That's the definition of basis vectors.
- So if I set x equal to this left-hand side, I can say
- that x is a member of V.
- Well, if x is equal to the left-hand side, it's also
- equal to the right-hand side.
- The right-hand side is some linear combination of V perp's
- basis vectors, the orthogonal complement of V's basis vectors.
- Which tells us that x is also a member of V perp.
- Well, what does that mean?
- That means that x must be equal to 0.
- I just showed you at the beginning of the video, the
- only vector that's a member of a subspace and its complement
- is the 0 vector.
- So we know that because these are orthogonal complements, we
- know that x must be equal to 0.
- So just to reiterate, we know 0 has to equal both
- sides of this equation.
- And these are the same constants that we
- had to begin with.
- But what do we know about these two sets?
- We know that the 0 vector has to be equal to this.
- That's the only vector in Rn that's a member of both V and
- of the orthogonal complement of V.
- Now, this is the 0 vector, and we have this linear combination
- of the v's being set equal to the 0 vector.
- What do we know about these constants?
- What does c1, c2, all the way to ck have to be?
- We know that v1 through vk is a basis for V.
- That tells us that they span V and that they are linearly
- independent.
- Linear independence by definition means that the only
- solution to this equation right here is that all of the
- constants have to be 0.
- So linear independence tells us that c1, c2, all the way
- through ck must be 0.
- All of these guys right here are 0.
- Which is the same as all of these guys.
- All of these guys must be 0.
- Now let's look at the right-hand
- side of this equation.
- We could distribute the minus all the way through, but the
- same argument holds.
- This linear combination of V perp's basis
- vectors is equal to 0.
- The only solution to this being equal to 0-- because each
- of these w1's, w2's, and wn minus k's are linearly
- independent-- is all of the constants having
- to be equal to 0.
- That falls out of linear independence.
- If this negative is confusing you a bit, if it makes it look
- different than that, you could just multiply this negative
- out and say minus d1 would have to be equal to 0, minus
- d2 would have to be 0, and minus dn minus k
- would have to be 0.
- But it's the exact same argument.
- Linear independence, which falls out of the fact that
- this is a basis set, implies that the only solution to this
- being equal to 0 is each of the constants
- being equal to 0.
- Well, that means that d1, d2, all the way to dn
- minus k must be 0.
- Let's go back to what I wrote up here.
- This was the original equation that we were
- experimenting with.
- Just by manipulating this equation a bit, and by
- understanding that the only intersection between V and V
- perp is the 0 vector,
- and that you only have linear independence if the only
- solution to these vectors equaling 0 is all of their
- constants equaling 0,
- we know that all of these terms, c1 through ck, d1
- through dn minus k, all have to be equal to 0.
- That's the only solution to this larger equation that I
- wrote up here: all of the constants equal 0.
- That implies that if I were to take the set right here of v1,
- v2, all the way to vk, and I were to augment that with the
- basis vectors of V perp, which are w1, w2, all the way to wn
- minus k, that this is a linearly independent set.
- And I know that because the only solution to this equation
- is each of these constants having to be equal to 0.
- That's what linear independence means.
- This implies this.
- Linear independence implies that.
- We used the fact that linear independence implies that all
- of these equal 0 to get the fact that c1 all the way to ck
- was equal to 0.
- And then we used it again when we set this thing also
- equal to the 0 vector.
- We knew that all of the d's had to be equal to 0.
- I don't know if you remember, the 0 vector came out from the
- fact that that was the only vector that is a
- member of both sets.
- I know I'm being a little bit repetitive, but I really want
- you to understand that this proof isn't some type of
- circular proof.
- That we just wrote this equation, we wondered about
- what the solution set is to it, we rearranged it, we said
- hey both sides of this equation are members of both V
- and V perp.
- The only vector that's a member of
- both is the 0 vector.
- So both of these sides of the equation have
- to be equal to 0.
- The only solution to that is all of these constants being
- equal to 0, because each of these are
- linearly independent sets.
- So therefore all of these constants have
- to be equal to 0.
- And then this augmented set, where if you combined all of
- the basis vectors, that is going to be linearly
- independent.
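(Editorial sketch again, with the same made-up B_V: linear independence of the combined set is equivalent to the n x n matrix of all the basis vectors having full rank n.)

```python
import numpy as np
from scipy.linalg import null_space

B_V = np.array([[1., 0.],
                [2., 1.],
                [0., 3.],
                [1., 1.]])
B_W = null_space(B_V.T)           # basis for V perp, as before

# Put all n basis vectors side by side as columns.  The augmented set
# is linearly independent exactly when this n x n matrix has rank n.
combined = np.hstack([B_V, B_W])
print(np.linalg.matrix_rank(combined))  # 4 = n, so it's a basis for R^4
```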
- Now many, many, many, many, many videos ago, we learned
- that if we have some subspace with dimension n, and we have
- n linearly independent vectors that are members of your
- subspace, then those n linearly independent vectors,
- or the set of your n vectors, is a basis for the subspace.
- Now Rn is a subspace of itself.
- Rn is an n-dimensional subspace.
- We could write that the dimension of Rn is equal to n.
- Now we have n linearly independent vectors in Rn.
- So that tells us that these guys right here
- are a basis for Rn.
- We have n linearly independent vectors.
- We have n minus k that are coming from V perp.
- We have k that are coming from V, from the
- bases for those subspaces.
- So now we have a total of n vectors.
- They're linearly independent.
- They're all members of Rn.
- So they are a basis for Rn.
- Which tells us that any vector in Rn can be represented as a
- linear combination of these guys, which is fascinating.
- So this is a basis for Rn.
- So that tells us that we can take any vector-- let's say a
- is a member of Rn, some vector.
- That means, since this is a basis for Rn, that a can be
- represented as some linear combination of
- all of these guys.
- So it can be represented as c1 times v1 plus c2 times v2, all
- the way to plus ck times vk.
- Let me use a different letter just to make sure that you
- understand that this is a different equation that I'm
- writing than I wrote earlier in the video.
- So I can write this, and then I can have some other
- constants: plus e1 times our first V perp basis vector,
- w1, plus e2 times w2, plus all the way to en minus k
- times wn minus k, the (n minus k)th basis vector for V perp.
- I can represent any vector in Rn this way.
- Or another way to say it.
- What is this?
- This is some vector that is a member of our subspace V.
- And then this is some vector over here that is a member of
- the orthogonal complement of V.
- This is just a linear combination of V
- perp's basis vectors.
- This is just a linear combination
- of V's basis vectors.
- So given that all of these characters are a basis for Rn
- tells us that any member of Rn can be represented as a linear
- combination of them.
- But that means that any member of Rn can be represented as a
- sum of a member of our subspace V plus a member of
- the subspace V perp.
- This is a member of V, and this is a member of V perp.
- And that's a really, really interesting idea.
- You give me a subspace and then we can figure out its
- orthogonal complement.
- Then any vector in Rn can be represented as a combination,
- or sum, of some vector in our subspace and some vector in
- its orthogonal complement.
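(A minimal sketch of this decomposition, still with the illustrative B_V and an arbitrary made-up vector a: solving one linear system gives the coordinates in the combined basis, and splitting them gives the V part and the V perp part.)

```python
import numpy as np
from scipy.linalg import null_space

B_V = np.array([[1., 0.],
                [2., 1.],
                [0., 3.],
                [1., 1.]])
B_W = null_space(B_V.T)
combined = np.hstack([B_V, B_W])  # a basis for R^4, shown above

a = np.array([3., -1., 2., 5.])   # an arbitrary vector in R^4

# Coordinates of a in the combined basis: the first k entries are the
# c's, the last n - k entries are the e's.
coeffs = np.linalg.solve(combined, a)
v_part = B_V @ coeffs[:2]               # member of V
w_part = B_W @ coeffs[2:]               # member of V perp
print(np.allclose(v_part + w_part, a))  # True: a = v + w
print(np.allclose(B_V.T @ w_part, 0))   # True: w is orthogonal to V
```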
- Now the next question you might be asking, is this
- representation unique?
- So is this unique?
- Well let's test it out by assuming it's not unique.
- So that means that for some vector a that is a member
- of Rn, I can represent it in two ways.
- I can represent it as equalling some member of my
- subspace V, plus some member of the orthogonal
- complement of V.
- I can represent it that way.
- Or I could represent it as some other member of my
- subspace V plus some other member of my orthogonal
- complement.
- So x1, x2 are members of V perp.
- And then v1 and v2 are members of V.
- If we assume it's not unique, there's two ways
- that I could do this.
- And I'm representing it as these two vectors.
- Now clearly this side of this equation is equal to that.
- These are both representations of a.
- So we can rearrange this a little bit.
- We could say that v1 minus v2 is equal to x2 minus x1--
- that's subtracting v2 from both sides
- of the equation, and then subtracting x1
- from both sides.
- These are both members of the subspace V.
- And any subspace is closed under addition and
- subtraction, which is really a special case of addition.
- Let me write it this way: let me call this common
- vector z, equal to both of these guys, which are equal
- to each other.
- z is the vector v1 minus v2.
- Any subspace is closed under addition: if you take two
- vectors in a subspace and find their difference, then
- that resulting difference is also
- going to be in the subspace.
- z is going to be a member of our subspace V.
- This vector right here-- which we just
- said is also equal to our vector z-- is going to be a
- member of V perp.
- Why?
- Because both x1 and x2 are members of the subspace V's
- orthogonal complement.
- And that is a subspace as well.
- So it is closed under addition and subtraction.
- So this difference is also going to be a member of that subspace.
- So we could also say that z is a member of V perp or the
- orthogonal complement of V.
- Well, we've done this multiple times.
- This was the first thing we showed in the video.
- The only vector that's a member of a subspace and its
- orthogonal complement is the 0 vector.
- So z has to be equal to the 0 vector.
- So this is equal to the 0 vector.
- Well, if both of these are equal to the 0 vector, we know
- that v1 minus v2 is equal to the 0 vector, which implies
- that v1 must be equal to v2.
- And we also know that x2 minus x1 is equal to the 0 vector.
- Or x2 is equal to x1.
- So we tried to say that, hey, there's two ways to construct
- some arbitrary vector a that's in Rn.
- And we wrote that down.
- But then we found out that, no, v1 must be equal to v2 and x1
- must be equal to x2.
- So there's only a unique way to write any member of Rn as a
- sum of a vector that's in our subspace V and a vector that
- is in the orthogonal complement of V.
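(One last editorial check of uniqueness with the same toy data: computing the V component two independent ways, by orthogonal projection onto V and by the coordinate split used above, lands on the same vector.)

```python
import numpy as np
from scipy.linalg import null_space

B_V = np.array([[1., 0.],
                [2., 1.],
                [0., 3.],
                [1., 1.]])
B_W = null_space(B_V.T)
a = np.array([3., -1., 2., 5.])

# Route 1: project a orthogonally onto col(B_V).
P = B_V @ np.linalg.inv(B_V.T @ B_V) @ B_V.T
v_proj = P @ a

# Route 2: split a's coordinates in the combined basis [B_V | B_W].
coeffs = np.linalg.solve(np.hstack([B_V, B_W]), a)
v_split = B_V @ coeffs[:B_V.shape[1]]

print(np.allclose(v_proj, v_split))  # True: the decomposition is unique
```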