MAT 2705 Syllabus

MAT2705 Differential Equations with Linear Algebra

Elementary use of MAPLE as a supporting tool is required throughout the entire MAT1500-1505-2500-2705 sequence of Calculus and Differential Equations with Linear Algebra for Science and Engineering majors. For its use in this course, see below.

Text: Differential Equations and Linear Algebra
Edwards and Penney Edition 3e 2009 Prentice Hall: ISBN-10: 0136054250 ISBN-13: 9780136054252
Textbook web page: http://wps.prenhall.com/esm_edwards_dela_2

To economize on cost and weight we offer a cheaper paperback
custom edition 3e for Villanova, covering the first 7 chapters (about 2/3 of the book):
ISBN-13: 9781256918967

The full textbook can be purchased or rented as a paper book or e-book on the open textbook market or through the Villanova University Shop.

Textbook coverage: Chapters 1-7.
The following core sections are recommended; optional sections are indicated with single square brackets [...],
omitted sections with double square brackets [[...]]:

1. First-Order Differential Equations.
1.1: Differential Equations and Mathematical Models.
1.2: Integrals as General and Particular Solutions.
1.3: Slope Fields and Solution Curves.
1.4: Separable Equations and Applications.
1.5: Linear First-Order Equations.
1.6: [[Substitution Methods and Exact Equations.]]

2. Mathematical Models and Numerical Methods.
2.1: Population Models.
2.2: [[Equilibrium Solutions and Stability.]]
2.3: [Acceleration-Velocity Models.]
2.4: [Numerical Approximation: Euler's Method.]
2.5: [[A Closer Look at the Euler Method.]]
2.6: [[The Runge-Kutta Method.]]

3. Linear Systems and Matrices.
3.1: Introduction to Linear Systems.
3.2: Matrices and Gaussian Elimination.
3.3: Reduced Row-Echelon Matrices.
3.4: Matrix Operations.
3.5: Inverses of Matrices.
3.6: Determinants. (emphasis on row operation evaluation;
         Cramer's rule and adjoint formula for inverse can be omitted after mentioning)
3.7: [Linear Equations and Curve Fitting.]

4. Vector Spaces.
4.1: The Vector Space R^3.
4.2: The Vector Space R^n and Subspaces.
4.3: Linear Combinations and Independence of Vectors.
4.4: Bases and Dimension for Vector Spaces.
4.5: [[Row and Column Spaces]]
4.6: [[Orthogonal Vectors in R^n.]]
4.7: [General Vector Spaces.]

5. Linear Equations of Higher Order.
5.1: Introduction: Second-Order Linear Equations.
5.2: General Solutions of Linear Equations. (de-emphasize n>2)
5.3: Homogeneous Equations with Constant Coefficients.
5.4: Mechanical Vibrations.
5.5: Nonhomogeneous Equations and Undetermined Coefficients.
(de-emphasize most general case) [[omit variation of parameters]]
5.6: Forced Oscillations and Resonance. (pick carefully from too much material here)

6. Eigenvalues and Eigenvectors.
6.1: Introduction to Eigenvalues.
6.2: Diagonalization of Matrices.
6.3: [Applications Involving Powers of Matrices.]

7. Linear Systems of Differential Equations.
7.1: First-Order Systems and Applications.
7.2: Matrices and Linear Systems.
7.3: The Eigenvalue Method for Linear Systems.
7.4: Second-Order Systems and Mechanical Applications.
7.5: Multiple Eigenvalue Solutions. (non-defective matrices only)
7.6: [[Numerical Methods for Systems.]]

MAPLE

Because of the increasing number of freshmen entering with advanced credit, little Maple knowledge can be assumed, but this is not a significant problem thanks to the increasingly user-friendly "clickable calculus" interface philosophy, which sidesteps syntax and commands. Introducing MAPLE example and template worksheets associated with textbook homework problems makes MAPLE usable as a minimal support tool, allowing graphing calculators or Maple to substitute for some required mechanical steps on quizzes and tests, as well as to check any hand calculations requested. Most calculators can now do row reductions.

Although the Edwards and Penney website has on-line MAPLE projects in PDF format and the corresponding MAPLE worksheets (http://wps.prenhall.com/esm_edwards_dela_2 [same for edition 3]), they are not useful in practice in our course. It is more important to have students use a limited number of MAPLE evaluation tasks on regular homework problems assigned from the textbook. In particular, the solution of any set of DEs plus initial conditions should always be available as a check for all students.

NOTE. This course is not about differentiation or integration but about what to do with the derivative and integral in the context of differential equations. Every student should immediately learn how to solve a differential equation with initial condition(s) in order to quickly check any hand solutions (see below for syntax). Students should also use Maple or a graphing calculator to check every antiderivative, and indeed to provide the antiderivative if the integration is anything but trivial. This is very easy in Standard Maple with its clickable calculus interface.

In chapter 1, DEplot for direction fields and dsolve for exact solutions (Project 1.6) should be introduced and used together with some homework problems from the text.
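A minimal pair of inputs for an illustrative first-order problem like y' = x - y (an example equation, not one from the text) might look like this, after loading the DEtools package:

>  with(DEtools):
>  DEplot(diff(y(x),x) = x - y(x), y(x), x = -2..2, y = -2..2);   # direction field
>  dsolve({diff(y(x),x) = x - y(x), y(0) = 1}, y(x));             # exact solution with initial condition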

In chapter 3 the Linear Solve Tutor and the Student[LinearAlgebra] package (ReducedRowEchelonForm, BackwardSubstitute, etc.) should be used for automated row operations and reduction, and automated solution of linear systems should be covered. Technology should be emphasized for the row operations, since it is extremely difficult to do all the arithmetic of a row reduction by hand correctly, and the arithmetic is not the point: the sequence of row operations is. Later in the course solving the linear system is not the main point, and the complete row reduction should be done at once with technology. Students should know how to compute determinants with technology so they can use their values to draw conclusions, after having at least one technology experience using row reduction without MultiplyRow operations to evaluate a determinant. Then in chapter 6 the eigenvector tutor and right-click eigenvector evaluation should be introduced.
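As a sketch of the corresponding command-line tools (the augmented matrix here is just an illustrative example):

>  with(Student[LinearAlgebra]):
>  A := <<1|2|3>, <2|5|8>>;                     # augmented matrix of a 2x2 linear system
>  GaussianElimination(A);                      # row echelon (triangular) form
>  ReducedRowEchelonForm(A);                    # complete Gauss-Jordan reduction
>  BackwardSubstitute(GaussianElimination(A));  # solution from the echelon form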

In chapter 5, solve and fsolve, or right-click access to them, should be used for higher-order (even quadratic!) polynomial roots.
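For example, for the characteristic equation of an illustrative constant coefficient DE like y'' + 4y' + 13y = 0:

>  solve(r^2 + 4*r + 13 = 0, r);    # exact complex roots -2 + 3*I, -2 - 3*I
>  fsolve(r^3 - 2*r - 5 = 0, r);    # numerical real root of a higher degree example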

In chapters 6 and 7, the DEplot command should be extended from chapter 1 to include phase plane plots for 2-D linear systems, used to motivate and visualize eigenvectors with 2x2 matrices.
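A sketch with an illustrative symmetric matrix (eigenvalues 3 and -1, with eigenvectors along the directions (1,1) and (1,-1), so the straight line trajectories in the phase plane are the eigenvector solutions):

>  with(DEtools):
>  sys := [diff(x(t),t) = x(t) + 2*y(t), diff(y(t),t) = 2*x(t) + y(t)]:
>  DEplot(sys, [x(t), y(t)], t = -2..2, [[x(0)=1, y(0)=0], [x(0)=0, y(0)=1]],
>         x = -3..3, y = -3..3);   # phase plane with two solution curves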

Maple specific hints:

For stating differential equations using prime notation, the default differentiation variable x is assumed. After inputting one or more differential equations and initial conditions separated by commas, entering the input line and right-clicking on the output allows the choice Solve DE Interactively, which brings up an applet where one can choose Solve Symbolically, then Solve, to solve the equations:
>  x1''=x2, x2''=x1,x1(0)=1,x1'(0)=0,x2(0)=0,x2'(0)=1
Click on Quit to close the pop-up window and return the solution to your worksheet.

If you want a different default differentiation variable like t without being bothered to change it, simply use explicit function notation with the desired variable:
>   x1''(t)=x2(t), x2''(t)=x1(t),x1(0)=1,x1'(0)=0,x2(0)=0,x2'(0)=1

Don't waste time entering subscripted variable names (x with the subscript 1) with prime notation; just use the plain name x1 (although the subscripted variable names will work).

Matrices can be entered with the Matrix palette. A superscript of -1 will produce the inverse of a square matrix, while a space " " between matrices will multiply them, without loading the LinearAlgebra or Student[LinearAlgebra] packages. To multiply a matrix by a scalar variable (rather than a hard number), you must use the asterisk "*" between the scalar and the matrix: 2 A but x*A, where the asterisk is then converted to a centered dot by the 2-D input interpreter. Matrices can also be entered directly using < > to enclose rows or lists of rows: commas separate the entries of a Vector (or stack entries vertically in a column), while the vertical bar " | " separates entries horizontally in a row.
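For example, entering and manipulating a small illustrative matrix (in 1-D input, where an explicit period "." is the matrix multiplication operator instead of the space):

>  A := <<1|2>, <3|4>>;     # rows separated by commas, row entries by |
>  B := <<1,3> | <2,4>>;    # the same matrix entered by columns
>  A^(-1);                  # inverse of a square matrix
>  A . B;                   # matrix product
>  x*A;                     # scalar variable times a matrix requires *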

Right-clicking on a matrix and selecting Standard Operations allows the determinant to be evaluated. Selecting Eigenvalues, etc. allows one to get the eigenvalues and eigenvectors from the single choice Eigenvectors, or the preliminary characteristic polynomial.
Right-clicking on a matrix and selecting Solvers and Forms, then Row Echelon Form, then Reduced will give the reduced row echelon form of a matrix. Students should learn the row reduction algorithm using the menu choice Tools, Tutors, Linear System Solving... to pop up an applet allowing the matrix to be entered and the step-by-step reduction performed, with hints if necessary (choose Gauss-Jordan reduction). This eliminates the mistakes from many arithmetic steps, which are not what humans are meant to do except for very small matrices.
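The same evaluations are available as commands once the LinearAlgebra package is loaded (the matrix here is an illustrative example):

>  with(LinearAlgebra):
>  A := <<2|1>, <1|2>>;
>  Determinant(A);                       # 3, nonzero, so A is invertible
>  CharacteristicPolynomial(A, lambda);  # the preliminary characteristic polynomial
>  Eigenvectors(A);                      # eigenvalues plus a matrix of corresponding eigenvectors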
Without using this Linear System Solving applet, the BackwardSubstitute command, which delivers the solution corresponding to a row reduced augmented matrix, is only available after loading the Student[LinearAlgebra] package:
> with(Student[LinearAlgebra]):         also available from the menu choice Tools, Load Package, Student Linear Algebra.

Students must be reminded to explicitly type an asterisk or leave a space to imply multiplication of two quantities in Maple, and that Euler's number is not obtained by typing the letter "e": the Expression palette or exp(x) notation must be used to insert an exponential expression.

For more useful interface hints see Maple Examples and Tips.

Syllabus Comments by Course Coordinator

Coverage of the syllabus is a tricky problem here because combining the two topics of differential equations and linear algebra in one semester requires cutting interesting parts of both. However, this is a terminal course, so it is perhaps more important that the topics which are covered convey the enthusiasm of the instructor and accomplish student learning, even at the sacrifice of giving less attention to other parts of the syllabus (particularly those which occur at the end of the semester when time is running out). Thus the individual instructor must decide how to streamline certain parts of the syllabus in order to compensate for the extra attention given to others. Sections which are marked as optional can be mined for whatever interesting examples appeal to a given instructor, if desired, but sparingly.

One can easily overspend time in the first two chapters, which are full of applications and optional material, so one must take care to pick wisely. One can streamline the first three sections of chapter 3, emphasizing the rref reduction and relying on MAPLE (or optionally graphing calculators) to perform the reduction in practice. Interpretation of the rref form is more important than the distinction between Gauss and Gauss-Jordan reduction, and hand computation of determinants can also be de-emphasized. [Row and column cofactor expansions should NOT be covered.] Determinants should initially be evaluated only using row reduction to triangular form without MultiplyRow operations, so that a zero or nonzero value of the determinant is related to invertibility of the matrix. Later either MAPLE or graphing calculators should be used to evaluate determinants in practice.

Emphasis can be given to section 3 in chapter 4 on the R^n vector spaces, streamlining the other 3 required sections. Section 4.7 is not very effective since the most interesting example of non-R^n vector spaces, the solution space of a linear second order DE, has not been covered yet, so the example there is unnatural. The main point that the student should understand is that solving a homogeneous linear system A x = 0 is equivalent to checking the linear independence of the columns of A, while solving a nonhomogeneous linear system A x = b is equivalent to trying to express the vector b as a linear combination of the columns of A. An inconsistent system means this is not possible. When the columns are linearly independent, a solution, if one exists, is unique.
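This interpretation is easy to demonstrate in Maple with an illustrative matrix whose columns are linearly dependent:

>  with(Student[LinearAlgebra]):
>  A := <<1|2>, <2|4>>;       # second column is twice the first
>  LinearSolve(A, <3, 6>);    # b is a combination of the columns: a free parameter appears
>  LinearSolve(A, <1, 0>);    # b is not: Maple reports an inconsistent system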

In chapter 5, one can give section 2 light treatment, omitting Wronskians (or explaining them in the context of the matrix of coefficients in solving the initial condition system), and one can lighten the undetermined coefficients section 5 by not dwelling too much on the most general case. [In practice only constant, exponential and sinusoidal driving functions are of much use.] Section 6 is really full, but in practice this is perhaps the most useful knowledge to take away from the course as far as single higher order DEs are concerned, especially the idea of resonance.

In chapter 6, it makes sense to first motivate eigenvectors with DEplot to show the eigenvector solutions of the linear DE system  x'=A x (not introduced till chapter 7) rather than just doing them first without previewing why they are needed. A very nice interactive Duke University on-line applet really drives home what eigenvectors and eigenvalues mean visually in the 2x2 matrix case.

In chapter 7, one can omit the subsection "simple 2-D systems" in section 1 and only expose students to the idea that reduction of order is necessary to reduce coupled damped oscillator systems to first order form. No need to worry about Wronskians in 7.2. Note that students have trouble with complex arithmetic (algebra!), which requires review, but complex eigenvectors should certainly be covered. The second order undamped multiple spring systems should be covered as the final topic, since this unites chapter 5 with the eigenvector technique, tying together the major concepts of the course and providing a toy system with some connection to everyday intuition.

In my mind chapters 3 and 4 for matrix manipulations, and then 5, 6 and 7 for second order DEs and systems of DEs, are the meat of this course. Chapters 1 and 2 overlap with most Calc 2 courses, where first order differential equations are usually already covered. However, the one new idea encountered immediately there that students have a hard time digesting is that to check a solution of a DE, one must replace the unknown everywhere in the DE by its expression in terms of the independent variable, and then simplify until the left and right hand sides agree; if they are not equivalent using algebra, it is not a solution. Chapter 2 allows some useful practice at modeling problems. The one drawback of the E&P presentation of eigenvectors is the failure to connect them with a linear change of coordinates, which is the decoupling mechanism for coupled linear DEs; I develop the 2-dimensional case by having students draw the new coordinate grids over the usual Cartesian grids when introducing new coordinates associated with a new basis of the plane.

---bob jantzen