# Faith and Science: Symbolic Logic

Almost all of today’s class was spent using logical equivalences and rules of inference in their symbolic form to verify the validity of various arguments. One of my favorites was the following:

If the Mosaic account of cosmogony (the account of the creation in Genesis) is strictly correct, the sun was not created till the fourth day. And if the sun was not created till the fourth day, it could not have been the cause of the alternation of day and night for the first three days. But either the word “day” is used in Scripture in a different sense from that in which it is commonly accepted now or else the sun must have been the cause of the alternation of day and night for the first three days. Hence it follows that either the Mosaic account of the cosmogony is not strictly correct or else the word “day” is used in Scripture in a different sense from that in which it is commonly accepted now.

We label the various statements that make up this argument as follows: $$M$$ = “the Mosaic account of the cosmogony is strictly correct,” $$C$$ = “the sun was created before the fourth day,” $$A$$ = “the sun was the cause of the alternation of day and night for the first three days,” and $$D$$ = “the word ‘day’ is used in Scripture in a different sense from that in which it is commonly accepted now.” Thus the argument takes the form:
$$\begin{array}{ll} 1.& M \Rightarrow \ \sim C \\ 2. & \sim C \Rightarrow \ \sim A \\ 3. & D \vee A \\ \therefore & \sim M \vee D \end{array}$$

The proof goes like this:
$$\begin{array}{lll} 4. & M \Rightarrow \ \sim A & \mbox{from 1,2 by Hypothetical Syllogism}\\ 5. & A \vee D & \mbox{from 3 by Commutativity}\\ 6. & \sim \sim A \vee D & \mbox{from 5 by Double Negation} \\ 7. & \sim A \Rightarrow D & \mbox{from 6 by Material Implication}\\ 8. & M \Rightarrow D & \mbox{from 4,7 Hypothetical Syllogism} \\ 9. & \sim M \vee D & \mbox{from 8 Material Implication}\end{array}$$
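An argument form like this can also be checked by brute force: it is valid exactly when every truth assignment that makes all the premises true also makes the conclusion true. A minimal sketch in Python (the helper names here are mine, not anything from class):

```python
from itertools import product

def implies(p, q):
    # material implication: p => q is false only when p is true and q is false
    return (not p) or q

def valid(premises, conclusion, n_vars):
    # try every truth assignment to the n_vars propositional variables
    for vals in product([True, False], repeat=n_vars):
        if all(prem(*vals) for prem in premises) and not conclusion(*vals):
            return False  # found a counterexample: premises true, conclusion false
    return True

# variables in order: M, C, A, D
premises = [
    lambda M, C, A, D: implies(M, not C),      # 1. M => ~C
    lambda M, C, A, D: implies(not C, not A),  # 2. ~C => ~A
    lambda M, C, A, D: D or A,                 # 3. D v A
]
conclusion = lambda M, C, A, D: (not M) or D   # ~M v D

print(valid(premises, conclusion, 4))  # prints True
```

As a sanity check that the “either…or” in the conclusion really matters: swapping the disjunction for a conjunction, $$\sim M \wedge D$$, makes `valid` return `False`.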

Next time, a colleague from the Division of Mathematics and Sciences will take over for a couple classes, helping us to understand the scientific method cycle and the historical development of modern science.

# Calculus IV: Tangent Planes and Linearization

In Calculus IV on Thursday, February 8th, we covered the derivation of tangent planes. We also showed how the tangent plane for functions of two variables generalizes to functions of several variables, a process we call linearization.

We then used this concept of linearization to define differentiability for functions of several variables, and we covered the concept of differentials.
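For the record (this is the standard textbook formulation, nothing beyond what we derived in class): if $$f(x,y)$$ has continuous partial derivatives at $$(a,b)$$, the tangent plane to the surface $$z = f(x,y)$$ at that point is

$$z = f(a,b) + f_x(a,b)(x - a) + f_y(a,b)(y - b),$$

and the linearization is the same expression viewed as a function,

$$L(x,y) = f(a,b) + f_x(a,b)(x - a) + f_y(a,b)(y - b).$$

The corresponding differential is $$dz = f_x(a,b)\,dx + f_y(a,b)\,dy$$.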

I know that in this day and age, with technology so available, I should be utilizing some tools on the computer to draw the pictures that help us visualize these concepts. Honestly, I just really enjoy drawing the pictures by hand. Having taught this course for several years now, I think I’m getting pretty good at it. I’d post a picture, but I wouldn’t want anyone to disagree with me and hurt my feelings.

Next time, we cover the chain rule and will start directional derivatives.

# Intermediate Analysis: More order principles and absolute values

We had no homework assignments due on Thursday, so I dove right into lecture. We proved a few more results following from our ordering of the reals. With these theorems in hand, we were then able to do the basic solving of inequalities we teach in our lowest level of algebra. It is interesting that we get to one of our final senior undergraduate courses and start doing things we cover in the earliest mathematics courses, but this time, we’ve built our understanding of these rules from the ground up. Have no fear, analysis students: we will go much, much beyond the mathematical tools we used in intermediate algebra.

Next, we defined the concept of absolute value and proved some simple results of that definition. The next big thing after that was to use absolute values to create $$\varepsilon$$-neighborhoods, which will provide us the necessary structures to define open and closed sets, cluster points, limits, etc.
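To fix notation (the symbol $$V_\varepsilon(a)$$ is one common textbook choice): given $$a \in \mathbb{R}$$ and $$\varepsilon > 0$$, the $$\varepsilon$$-neighborhood of $$a$$ is the set

$$V_\varepsilon(a) = \{\, x \in \mathbb{R} : |x - a| < \varepsilon \,\},$$

that is, the open interval $$(a - \varepsilon, a + \varepsilon)$$.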

Before we get to those concepts, however, we will divert to another low-level algebra topic of solving absolute value equations and inequalities. That will happen next time at the very beginning of class.

# Linear Algebra: Partitioning Matrices

On Wednesday of last week, we completed our material over elementary matrices, using them to derive the inverse of a matrix. Upon proving that a matrix is non-singular (i.e., invertible) if and only if it is row equivalent to the identity, we noticed that the same row operations that change a matrix $$A$$ into the identity will change the identity into the inverse of $$A$$.
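That observation is exactly the augmented-matrix method: row reduce $$[\,A \mid I\,]$$ until the left half becomes the identity, and the right half becomes $$A^{-1}$$. A quick numerical sketch with NumPy (`invert_by_row_reduction` is my own name; the pivot search is a numerical safeguard, not something from class):

```python
import numpy as np

def invert_by_row_reduction(A):
    """Row reduce the augmented matrix [A | I]; if A is row equivalent
    to I, the same operations turn the right half into A^{-1}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # the augmented matrix [A | I]
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))  # pick a nonzero pivot below row j
        if np.isclose(M[p, j], 0.0):
            raise ValueError("A is singular: not row equivalent to I")
        M[[j, p]] = M[[p, j]]                # type I: interchange rows
        M[j] /= M[j, j]                      # type II: scale the pivot to 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]       # type III: clear the rest of the column
    return M[:, n:]                          # right half is now A^{-1}

A = np.array([[1.0, 2.0], [3.0, 5.0]])
A_inv = invert_by_row_reduction(A)
# A_inv agrees with np.linalg.inv(A): [[-5, 2], [3, -1]]
```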

We then closed off that section by looking at how to form an LU decomposition, or factorization, of a matrix. The basic algorithm: row reduce the matrix to upper triangular form, keeping track of the elementary matrices used, then compute L as the product of the inverses of those elementary matrices. Unfortunately, I forgot to mention (and WILL mention in the next class) that this only works if we use only row operations of type III. That fact guarantees that the product of the inverses of those elementary matrices will be lower triangular.
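A sketch of that procedure in Python with NumPy (`lu_no_pivot` is my own name; it assumes every pivot is nonzero, so only type III operations are ever needed):

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization by Gaussian elimination using only type III row
    operations (add a multiple of one row to a row below it).
    Assumes no pivot is zero, so no row interchanges are required."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = A.copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]   # multiplier for the type III operation
            U[i, :] -= m * U[j, :]  # zero out the entry below the pivot
            L[i, j] = m             # the inverse elementary matrix puts +m here
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
# L is unit lower triangular, U is upper triangular, and L @ U reproduces A
```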

After finishing section 4, we started to talk about partitioning matrices. We are simply trying to show that all of our matrix algebra works the same when the elements of the matrices are also matrices. I’m having a hard time convincing the students of the significance of this, since in any given matrix calculation, it is just as easy to compute the sums and products with simple matrices as with block matrices. As I see it, the greatest benefit to using block matrices is to deal with matrices that have a specific block structure that is maintained after calculation.
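As a small sanity check of that claim (a sketch; the 2-by-2 partition here is arbitrary), multiplying block-by-block as if the blocks were scalar entries reproduces the ordinary matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Partition each matrix into 2x2 blocks: A = [[A11, A12], [A21, A22]], etc.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Multiply block-by-block, exactly as if the blocks were scalar entries
C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])

# The block product agrees with the ordinary matrix product
assert np.allclose(C, A @ B)
```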

Next time, we will finish this section and be ready to schedule an exam.