Last Time: Linear ODEs with Constant Coefficients
We developed a powerful and complete theory for solving homogeneous linear ODEs with constant coefficients. Let’s briefly crystallize the main result.
Summary: Solving Homogeneous Linear ODEs with Constant Coefficients
For the equation $y^{(n)} + a_{n-1} y^{(n-1)} + \dots + a_1 y' + a_0 y = 0$:
- The Characteristic Equation: We transform the ODE into an algebraic problem by forming the characteristic polynomial: $p(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1} + \dots + a_1 \lambda + a_0$.
- Find the Roots: We find the roots (eigenvalues) $\lambda_1, \dots, \lambda_k$ of this polynomial, noting their multiplicities $m_1, \dots, m_k$.
- Construct the Basis: For each distinct root $\lambda_j$ with multiplicity $m_j$, we generate $m_j$ linearly independent solutions: $e^{\lambda_j x},\; x e^{\lambda_j x},\; \dots,\; x^{m_j - 1} e^{\lambda_j x}$.
- The General Solution: The general homogeneous solution is the linear combination of all $n$ basis functions found in the previous step, with arbitrary constants $c_1, \dots, c_n$.
The Inhomogeneous Equation: Method of Undetermined Coefficients
Now, we turn our attention to the full inhomogeneous equation: $y^{(n)} + a_{n-1} y^{(n-1)} + \dots + a_1 y' + a_0 y = f(x)$. Our goal is to find just one particular solution, $y_p$. The general solution will then be $y = y_h + y_p$.
The most direct method for this, when $f(x)$ has a nice form, is the Method of Undetermined Coefficients. The philosophy is simple: for a linear system, the form of the response ($y_p$) often mimics the form of the input ($f(x)$).
The Two-Step Process for Undetermined Coefficients
- Choose an Ansatz: We make an “educated guess” (an Ansatz) for the form of $y_p$ based on the form of $f(x)$. This guess will contain unknown constants (the “undetermined coefficients”).
- Determine the Coefficients: We substitute our Ansatz into the ODE. By comparing the coefficients of the functions on the left-hand side (LHS) and right-hand side (RHS), we solve for the unknown constants.
The table below is our guide for choosing the correct Ansatz.
| If $f(x)$ is… | The Ansatz for $y_p$ is… |
|---|---|
| $e^{\alpha x}$ | $A e^{\alpha x}$ |
| $\cos(\omega x)$ or $\sin(\omega x)$ | $A\cos(\omega x) + B\sin(\omega x)$ |
| $P_n(x)$ (polynomial of degree $n$) | $A_n x^n + \dots + A_1 x + A_0$ (a generic polynomial of degree $n$) |
| $e^{\alpha x}\cos(\omega x)$ or $e^{\alpha x}\sin(\omega x)$ | $e^{\alpha x}\bigl(A\cos(\omega x) + B\sin(\omega x)\bigr)$ |
The Superposition Principle
If $f(x)$ is a sum of functions, like $f(x) = f_1(x) + f_2(x)$, the linearity of the ODE allows us to use a powerful shortcut. We can find a particular solution $y_{p,1}$ for $f_1$ and a particular solution $y_{p,2}$ for $f_2$ separately, and then simply add them together. This is a direct consequence of the operator $L$ being linear: $L(y_{p,1} + y_{p,2}) = L(y_{p,1}) + L(y_{p,2}) = f_1 + f_2$.
Examples in Action
Let’s work through some examples to see how this method plays out.
Example 1: A Simple Exponential
Solve $y'' - 3y' + 2y = e^{3x}$.
1. Homogeneous Solution ($y_h$): The characteristic equation is $\lambda^2 - 3\lambda + 2 = 0$, which factors as $(\lambda - 1)(\lambda - 2) = 0$. The roots are $\lambda_1 = 1$, $\lambda_2 = 2$. So, $y_h = c_1 e^x + c_2 e^{2x}$.
2. Particular Solution ($y_p$): The right-hand side is $e^{3x}$. Our table suggests the Ansatz $y_p = A e^{3x}$. We need its derivatives: $y_p' = 3A e^{3x}$ and $y_p'' = 9A e^{3x}$. Substitute into the ODE: $9A e^{3x} - 3(3A e^{3x}) + 2A e^{3x} = e^{3x}$. Factor out $e^{3x}$: $(9A - 9A + 2A) = 1$, so $2A = 1$ and $A = \frac{1}{2}$. Our particular solution is $y_p = \frac{1}{2} e^{3x}$.
3. General Solution: $y = c_1 e^x + c_2 e^{2x} + \frac{1}{2} e^{3x}$.
Example 2: A Trigonometric Function
Solve $y'' - 3y' + 2y = \sin(x)$.
1. Homogeneous Solution ($y_h$): Same as before: $y_h = c_1 e^x + c_2 e^{2x}$.
2. Particular Solution ($y_p$): The RHS is $\sin(x)$. Even though there’s no cosine term, our Ansatz must include both sine and cosine: $y_p = A\sin(x) + B\cos(x)$. Derivatives: $y_p' = A\cos(x) - B\sin(x)$ and $y_p'' = -A\sin(x) - B\cos(x)$. Substitute into the ODE: $\bigl(-A\sin x - B\cos x\bigr) - 3\bigl(A\cos x - B\sin x\bigr) + 2\bigl(A\sin x + B\cos x\bigr) = \sin x$. Now, group the $\sin x$ and $\cos x$ terms separately: $(A + 3B)\sin x + (B - 3A)\cos x = \sin x$. For this to be true for all $x$, the coefficients must match: $A + 3B = 1$ and $B - 3A = 0$. This is a system of two linear equations. From the second, $B = 3A$. Substituting into the first gives $A + 9A = 1$, so $A = \frac{1}{10}$. Then $B = \frac{3}{10}$. Our particular solution is $y_p = \frac{1}{10}\sin(x) + \frac{3}{10}\cos(x)$.
3. General Solution: $y = c_1 e^x + c_2 e^{2x} + \frac{1}{10}\sin(x) + \frac{3}{10}\cos(x)$.
The Complication: Resonance
What happens if our choice of Ansatz for $y_p$ is already a solution to the homogeneous equation? If we plug it in, the left side will evaluate to zero, and we’ll get an equation like $0 = e^{\alpha x}$, which is impossible. This special case is called resonance.
The Resonance Rule
If any term in your initial Ansatz is already a solution to the homogeneous equation, you must modify the Ansatz by multiplying it by . If that new term is still a homogeneous solution (which happens with repeated roots), multiply by again, until no part of the Ansatz is a homogeneous solution.
This mathematical rule has a profound physical meaning. It describes what happens when you “drive” a system at its natural frequency. The system’s amplitude grows without bound—think of pushing a child on a swing at just the right moment, or the famous Tacoma Narrows Bridge collapse.
Example 3: A Case of Resonance
Solve $y'' - 3y' + 2y = e^{x}$.
1. Homogeneous Solution ($y_h$): We know $y_h = c_1 e^x + c_2 e^{2x}$.
2. Particular Solution ($y_p$): The RHS is $e^x$. Our standard Ansatz would be $y_p = A e^x$. But wait! $e^x$ is part of our homogeneous solution. This is a case of resonance.
Modification: We must modify our Ansatz by multiplying by $x$: $y_p = A x e^x$. Now we find the derivatives (using the product rule): $y_p' = A(1 + x)e^x$ and $y_p'' = A(2 + x)e^x$. Substitute into the ODE: $A(2 + x)e^x - 3A(1 + x)e^x + 2A x e^x = e^x$. Let’s group the terms. The terms with $x e^x$ must cancel out: $A(1 - 3 + 2)\,x e^x = 0$. What remains is $A(2 - 3)e^x = e^x$, i.e. $-A = 1$. This gives $A = -1$. Our particular solution is $y_p = -x e^x$.
3. General Solution: $y = c_1 e^x + c_2 e^{2x} - x e^x$.
A Different Beast: Separable Equations
So far, we’ve focused on linear equations. Most nonlinear ODEs are intractable, but there is one important class of (potentially nonlinear) first-order equations that we can solve systematically: separable equations.
Definition: Separable Equation
A first-order ODE is called separable if it can be written in the form: $y' = f(x)\,g(y)$. The key is that the right-hand side is a product of a function of $x$ only and a function of $y$ only.
The name comes from the solution method: we can “separate” the variables. Writing $y' = \frac{dy}{dx}$ and dividing by $g(y)$ gives $\frac{dy}{g(y)} = f(x)\,dx$. We have successfully isolated all the $y$ terms on one side and all the $x$ terms on the other. The solution is now found by integrating both sides: $\int \frac{dy}{g(y)} = \int f(x)\,dx$.
Solving a Separable Equation
Solve $y' = x + xy$.
1. Separate the variables: This is not in the standard form, but factoring gives $y' = x(1 + y)$, so we can see it’s separable: $\frac{dy}{1 + y} = x\,dx$.
2. Integrate both sides: $\int \frac{dy}{1 + y} = \int x\,dx$, which gives $\ln|1 + y| = \frac{x^2}{2} + C$.
3. Solve for y (if possible): Exponentiating, $1 + y = \pm e^C e^{x^2/2}$. Let’s call the new constant $A = \pm e^C$. Then $y = A e^{x^2/2} - 1$.
A Glimpse of What’s Next: Differential Calculus in $\mathbb{R}^n$
Our journey through ODEs concludes our study of single-variable calculus. We now pivot to the core topic of Analysis II: the calculus of functions of several variables.
In Analysis I, we studied functions of the form:
- $f : \mathbb{R} \to \mathbb{R}$ (scalar-valued, single variable)
- $f : \mathbb{R} \to \mathbb{R}^m$ (vector-valued, single variable)
The second case reduces to the first, as we can analyze the function component by component.
The true challenge and richness come when the domain is multi-dimensional. We want to study functions of the form: $f : \mathbb{R}^n \to \mathbb{R}^m$. When $m = 1$, we call this a scalar field (e.g., a temperature map $T : \mathbb{R}^3 \to \mathbb{R}$). When $m = n$, we call this a vector field (e.g., a wind velocity map $v : \mathbb{R}^3 \to \mathbb{R}^3$).
Our first examples of such functions are familiar from linear algebra:
- Linear Maps: $f(x) = Ax$, where $A$ is an $m \times n$ matrix.
- Affine Maps: $f(x) = Ax + b$.
- Quadratic Forms: $q(x) = x^\top A x$.
Just as the derivative in one dimension allowed us to approximate a curve with its tangent line, we will develop a new concept of the “derivative” for these multi-variable functions. This new derivative will not be a single number, but a linear map (a matrix!) that provides the best linear approximation to the function at a point. This fusion of calculus and linear algebra is the heart of multivariable analysis.