Category Archives: Linear Algebra

Linear Algebra: Polynomial Interpolation

Today, in class, I answered some homework questions from Section 1.4, including the following problems:

  1. Show that an upper triangular matrix with non-zero diagonal entries is nonsingular. (We haven’t covered determinants yet, so we used a row-equivalence-to-the-identity argument.)
  2. Show that the inverse of a nonsingular upper triangular matrix is upper triangular. (We used the fact that the same row operations that reduce a matrix to the identity will reduce the identity to the inverse of that matrix. The row operations that reduce an upper triangular matrix to the identity will necessarily change the identity into an upper triangular matrix.)
  3. Given the matrices, [tex]A[/tex] and [tex]C[/tex], solve the matrix equation: [tex]XA + C = X[/tex].
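
That last equation takes a bit of algebra to untangle; here is a sketch of the solution, assuming [tex]I - A[/tex] is nonsingular:

[tex]XA + C = X \iff X - XA = C \iff X(I - A) = C \iff X = C(I - A)^{-1}.[/tex]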

The last homework example was over the Vandermonde matrix system, proving that the Vandermonde system is equivalent to polynomial interpolation. We also proved that if the values of the [tex]\mathbf{x}[/tex]-vector (independent variables) are distinct, then the Vandermonde matrix is non-singular. As you might imagine, I got very excited and energetic about explaining this example. Polynomial interpolation was one of the first topics from Numerical Analysis that I fell “in love” with. It set me down the path to becoming a numerical analyst.
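
To make the connection concrete, here is a small sketch (mine, not from the text) that builds the Vandermonde matrix for three points with distinct [tex]x[/tex]-values and solves the system for the coefficients of the interpolating quadratic:

```python
# Sketch (mine, not from the text): build the Vandermonde matrix for
# three points with distinct x-values and solve V c = y for the
# coefficients of the interpolating quadratic p(x) = c0 + c1 x + c2 x^2.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # choose pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                        # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]            # the data points
V = [[x ** j for j in range(len(xs))] for x in xs]   # Vandermonde matrix
c = solve(V, ys)                                     # here c = [1, 0, 1]
p = lambda t: sum(ci * t ** i for i, ci in enumerate(c))
```

For these points the interpolant works out to [tex]p(x) = 1 + x^2[/tex], and since the [tex]x[/tex]-values are distinct, the Vandermonde matrix is nonsingular and this solution is unique.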

Originally, we had a test scheduled for the next class, but because I spent a significant portion of class time on polynomial interpolation, I decided to postpone the exam until after I can answer some questions from the review and the last section’s homework. We’ll do this next time, and if there are no questions, we’ll move on to the next chapter: a short one on determinants.

Linear Algebra: Block Matrix Multiplication

In class today, we finished the section on partitioning matrices. We verified that block matrices obey the same rules of matrix algebra. In particular, block matrix multiplication works just like ordinary matrix multiplication, as long as the dimensions are appropriate for the sub-matrices to be multiplied together.
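
A quick numerical check of that claim (my own sketch, not from the book): multiply two 4x4 matrices blockwise with 2x2 blocks and compare with the ordinary entrywise product.

```python
# Multiply two 4x4 matrices blockwise, using 2x2 blocks, and compare
# with the ordinary product (pure Python, no libraries).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(A, i, j):
    """The 2x2 sub-matrix in block-row i, block-column j of a 4x4 matrix."""
    return [row[2 * j:2 * j + 2] for row in A[2 * i:2 * i + 2]]

A = [[1, 2, 0, 1], [3, 0, 1, 2], [4, 1, 1, 0], [0, 2, 3, 1]]
B = [[2, 1, 0, 0], [1, 0, 1, 1], [0, 3, 2, 1], [1, 1, 0, 2]]

direct = matmul(A, B)

# Blockwise: C_ij = A_i1 B_1j + A_i2 B_2j, the same pattern as a 2x2
# scalar product.
C = [[matadd(matmul(block(A, i, 0), block(B, 0, j)),
             matmul(block(A, i, 1), block(B, 1, j))) for j in range(2)]
     for i in range(2)]

# Stitch the four blocks back into a 4x4 matrix.
blockwise = [C[i // 2][0][i % 2] + C[i // 2][1][i % 2] for i in range(4)]
```

The two results agree entry for entry, which is exactly the point of the section.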

We looked at a couple of examples of how we can exploit the structure of larger matrices using block matrices. For example, we proved that the matrix:

[tex]A= \left[ \begin{array}{cc}A_{11} & O \\ A_{21} & A_{22} \end{array}\right] [/tex]

is nonsingular iff [tex]A_{11}[/tex] and [tex]A_{22}[/tex] are nonsingular. We derived that:

[tex]A^{-1} = \left[ \begin{array}{cc}A_{11}^{-1} & O \\[2ex] -A_{22}^{-1}A_{21}A_{11}^{-1} & A_{22}^{-1} \end{array}\right][/tex]

I then used a rather manufactured example to demonstrate how we might make use of this fact. Find the inverse of the following matrix:

[tex]A=\left[\begin{array}{cc|ccc} 1 & 2 & 0 & 0 & 0 \\ 2 & 3 & 0 & 0 & 0\\ \hline 4 & 3 & 1 & 0 & 0 \\ 1 & 2 & -2 & 1 & 0 \\ 9 & 2 & 0 & 0 & 1 \end{array}\right] [/tex]

I made this matrix up so that [tex]A_{11}^{-1}[/tex] and [tex]A_{22}^{-1}[/tex] are easily calculable and thus, so is [tex]A^{-1}[/tex].
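
Here is a sketch of that computation (pure Python, my own illustration): invert the two diagonal blocks with hand-sized formulas and assemble [tex]A^{-1}[/tex] from the block formula derived above.

```python
# Invert the 5x5 example blockwise: A11 is 2x2, A22 is unit lower
# triangular, and the lower-left block of the inverse is
# -A22^{-1} A21 A11^{-1}.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A11 = [[1, 2], [2, 3]]                     # upper-left 2x2 block
A21 = [[4, 3], [1, 2], [9, 2]]             # lower-left 3x2 block
A22 = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]   # lower-right 3x3 block

# 2x2 inverse by the adjugate formula; det(A11) = 1*3 - 2*2 = -1.
d = A11[0][0] * A11[1][1] - A11[0][1] * A11[1][0]
A11inv = [[A11[1][1] / d, -A11[0][1] / d],
          [-A11[1][0] / d, A11[0][0] / d]]

# A22 is unit lower triangular; its inverse just flips the sign of the -2.
A22inv = [[1, 0, 0], [2, 1, 0], [0, 0, 1]]

# Lower-left block of the inverse: -A22^{-1} A21 A11^{-1}.
lower_left = [[-x for x in row] for row in matmul(A22inv, matmul(A21, A11inv))]

# Assemble A^{-1} from its four blocks.
Ainv = ([A11inv[i] + [0, 0, 0] for i in range(2)]
        + [lower_left[i] + A22inv[i] for i in range(3)])
```

Multiplying the assembled inverse back against the full 5x5 matrix recovers the identity, which is a nice sanity check for the class.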

I just started to touch on outer product expansions and so I’ll finish that up next time. The first exam has been scheduled for Monday, February 19. I handed out a review and will post an answer key to the review in Blackboard.

Linear Algebra: Partitioning Matrices

On Wednesday, last week, we completed our material over elementary matrices, using them to derive the inverse of a matrix. Upon proving that a matrix is non-singular (i.e., invertible) if and only if it is row equivalent to the identity, we noticed that the same row operations that change a matrix A into the identity will change the identity into the inverse of A.

We then closed off that section by looking at how to form an LU decomposition, or factorization, of a matrix. The basic algorithm is to row reduce the matrix to upper triangular form, keeping track of the elementary matrices used, and then compute L as the product of the inverses of those elementary matrices. Unfortunately, I forgot to mention (and WILL mention in the next class) that this only works if we use only row operations of type III. That restriction guarantees that the product of the inverses of those elementary matrices will be lower triangular.
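
A compact sketch of that procedure (mine, and it assumes no row interchanges are needed): each type III operation subtracts a multiple of the pivot row, and the stored multipliers are exactly the entries of the unit lower triangular factor L.

```python
# LU factorization via type III row operations only: reduce A to upper
# triangular U, and record each multiplier m in L, since the inverse of
# "R_i <- R_i - m R_k" is "R_i <- R_i + m R_k".

def lu(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # multiplier for R_i <- R_i - m R_k
            L[i][k] = m                 # the inverse operation puts m into L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
L, U = lu(A)                            # here L*U reproduces A exactly
```

With only type III operations, L comes out unit lower triangular, which is precisely the point I forgot to make in class.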

After finishing section 4, we started to talk about partitioning matrices. We are simply trying to show that all of our matrix algebra works the same when the entries of the matrices are themselves matrices. I’m having a hard time convincing the students of the significance of this, since in any given matrix calculation it is just as easy to compute the sums and products entry by entry as with block matrices. As I see it, the greatest benefit of using block matrices comes in dealing with matrices that have a specific block structure that is preserved by the calculation.

Next time, we will finish this section and be ready to schedule an exam.

Linear Algebra: Elementary Matrices

I don’t think it is possible for me to cover a whole section of this linear algebra book in one class. Today, we started the section on Elementary Matrices and just a little ways in, I knew it was going to be a difficult section.

The main purpose of this section is to use matrix multiplication to perform row operations and to establish the basic results on row equivalence, such as characterizations of nonsingularity: (i) a matrix is nonsingular if and only if it is row equivalent to I, and (ii) a matrix A is nonsingular if and only if the system [tex]A\mathbf{x} = \mathbf{0}[/tex] has only the trivial solution. Elementary matrices also provide a way to calculate matrix inverses through row reduction.
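
A small illustration of the central idea (my own, not the book's): build a type III elementary matrix and confirm that left-multiplying by it performs the corresponding row operation.

```python
# Left-multiplying by an elementary matrix performs a row operation.
# Here: the type III operation R_1 <- R_1 - 4 R_0 on a 3x3 matrix.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def type3(n, i, k, m):
    """Elementary matrix for the type III operation R_i <- R_i + m * R_k."""
    E = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    E[i][k] = m
    return E

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
E = type3(3, 1, 0, -4)       # encodes R_1 <- R_1 - 4 R_0
EA = matmul(E, A)            # EA = [[1, 2, 3], [0, -3, -6], [7, 8, 10]]
```

Note that the inverse of E is just the elementary matrix with the sign of the multiplier flipped, which is what makes the LU bookkeeping in the earlier post work.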

However, a good many of the results require a level of proof that is new to the students. I know that many of them are planning on going into engineering, another group is planning on teaching either middle school or secondary mathematics, and last of all, a group will likely go to graduate school in mathematics. It’s tough to design a course in linear algebra to meet all their needs but in the end, it’s worthwhile to see the reason behind the method.

Linear Algebra: Inverses and Transposes

It took three classes, but I finally finished the section on matrix algebra. During today’s lecture we walked through the concepts of matrix inverses and transposes. It’s amazing how long these lectures stretch out when you choose to demonstrate the concepts with examples involving matrix operations. As far as I can tell, all the students are following along very closely, except when I start going off on a tangent, trying to draw connections between linear algebra and higher-level mathematics.
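
One pair of facts from this part of the course that small examples make vivid is the reverse-order laws; here is a quick numerical check (my own sketch): [tex](AB)^T = B^TA^T[/tex] and, for invertible factors, [tex](AB)^{-1} = B^{-1}A^{-1}[/tex].

```python
# Verify the reverse-order laws on concrete 2x2 matrices:
# (AB)^T = B^T A^T  and  (AB)^{-1} = B^{-1} A^{-1}.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(A):
    """Inverse of a 2x2 matrix by the adjugate formula."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]

lhs_t = transpose(matmul(A, B))          # (AB)^T
rhs_t = matmul(transpose(B), transpose(A))  # B^T A^T

AB_inv = inv2(matmul(A, B))              # (AB)^{-1}
rev = matmul(inv2(B), inv2(A))           # B^{-1} A^{-1}
```

The order reversal is the part students most often get wrong, so a concrete check like this is worth the two minutes of class time.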

Somehow I did get sidetracked today into talking about my days as an undergrad at Wayland. I recounted the story of my first day of class with Dr. Almes, when he called out several names at the beginning of class, stating that he wanted to see all of us after class. I was actually filled with dread. I just knew that I had already done something to upset my professor. He was practically the head of the math department and I had ruined my chances of making nice with him. What was he going to do to me? I was going to have to change majors again.

My fears were relieved when, after class, he simply thanked us for attending his church the previous Sunday and invited us to return. It hit home for me that Wayland was more than a typical school. It was a family. It was a Christian family where I was going to school with my brothers and sisters in Christ and being taught by my brothers and sisters in Christ. There was no need to separate my learning from my faith; in fact, in many of my classes they would be closely integrated.

Next time in Linear Algebra, we will cover elementary matrices and derive a computational method for finding matrix inverses.

Linear Algebra: Properties of Matrix Algebra

During Linear Algebra on Monday, I began class by answering homework questions. I clarified the fact that [tex](0, \alpha, -\alpha)[/tex] and [tex](0, -\alpha, \alpha)[/tex] represent the same general solution to a linear system when [tex]\alpha \in \mathbb{R}[/tex]. Somehow the discussion of applications of the techniques we are learning came up. Thus far, we have basically covered how to use Gauss-Jordan elimination to solve linear systems. I pointed to examples from engineering (such as structural analysis of trusses) and computational fluid dynamics (discretization procedures to solve PDEs). I understand that these are a little outside the scope of this class but I chose them because of the individuals asking the question and based on their particular interests in mechanical engineering and aeronautics.
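
Since Gauss-Jordan elimination has been the workhorse so far, here is a minimal sketch (mine, not the course's code) that reduces the augmented matrix [A | b] to reduced row echelon form and reads off the solution:

```python
# Gauss-Jordan elimination: reduce [A | b] all the way to reduced row
# echelon form so the solution appears in the last column.
# Assumes A is square and nonsingular.

def gauss_jordan(A, b):
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]              # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]                    # scale pivot row to 1
        for i in range(n):                                # clear the rest of
            if i != k:                                    # the pivot column,
                m = M[i][k]                               # above and below
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
x = gauss_jordan([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

The same elimination loop, applied to the coefficient matrices arising from truss or discretized-PDE problems, is what connects this section to the applications the students asked about.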

After chasing a couple of rabbits, we continued working with operations on matrices, namely matrix addition, scalar multiplication, and matrix multiplication. We went over the properties of these operations, such as commutativity of addition, associativity of all three, the distributive laws, etc. We proved a couple of these.

I recognize that many of these students have little or no background in formulating a formal proof. In light of this, I chose to first prove that matrix addition commutes. Then I showed them one of the longer (though not really much more difficult) proofs, the associativity of matrix multiplication. The real challenge was to help them see, through the cumbersome notation, that the proof simply hinges on the associativity of multiplication of real numbers. I’m not sure that the next time I teach this class I would want to bother with this proof. I think it is important that they see proofs of these fundamental concepts, but I can accomplish that with a couple of simpler ones.
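
In symbols, the whole argument comes down to one interchange of finite sums (a sketch, with the matrices conformable so that every product is defined):

[tex]\left[(AB)C\right]_{ij} = \sum_k \left( \sum_l a_{il} b_{lk} \right) c_{kj} = \sum_l a_{il} \left( \sum_k b_{lk} c_{kj} \right) = \left[A(BC)\right]_{ij}.[/tex]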

We just got into inverses and identities and will finish up this section on matrix algebra next time. We’ll then be ready to start talking about partitioning matrices and doing some applications, such as traffic flow, balancing chemical equations, and search engines.