9

I am taking a course on ODEs, and I got a homework question in which I am required to:

  • Calculate the Wronskians of two function vectors (specifically $(t, 1)$ and $(t^{2}, 2t)$).
  • Determine in what intervals they are linearly independent.

There are more parts to this question, but I figured I would deal with them once I understand the basic concepts better. So these are my questions:

  • I know how to calculate the Wronskian of $n$ functions $f_{1}, \ldots, f_{n}$: $\begin{vmatrix} f_{1} & \cdots & f_{n} \\ \vdots & & \vdots \\ f_{1}^{(n-1)} & \cdots & f_{n}^{(n-1)} \end{vmatrix}$. I assume that when I'm asked to calculate the Wronskian of a function vector, my $n$ functions would be the vector's components? (See the sketch after this list.)
  • I know that if the Wronskian of $n$ functions is not $0$ for some $t$, I can deduce that they are linearly independent. How can I use this information to find the intervals in which two vectors are independent?
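For concreteness, here is a minimal sympy sketch of that interpretation (sympy is just one possible tool): treat the two vectors above as the columns of a matrix and take the determinant.

    import sympy as sp

    t = sp.symbols('t')

    # The function vectors (t, 1) and (t^2, 2t) become the columns of a matrix.
    F = sp.Matrix([[t, t**2],
                   [1, 2*t]])
    W = sp.simplify(F.det())
    print(W)  # t**2, which is nonzero for every t != 0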

I would love to read a good explanation of why these methods work (sadly, I cannot understand a thing from my notebook, and the library is closed on the weekend), so if you could explain it or direct me to a good online resource, preferably not Wikipedia, I would be glad.

And finally, I apologize in advance if I'm not very clear; I am not a native English speaker.

Thanks!

  • @joriki You are probably right :) (2012-01-07)

3 Answers

18

Let me address why the Wronskian works. To begin, let's use vectors of functions (not necessarily solutions of some ODE).

For convenience, I'll just work with $3 \times 3$ systems.

Let $ {\bf f}_1(t) = \begin{bmatrix} f_{11}(t) \\ f_{21}(t) \\ f_{31}(t) \end{bmatrix}, \qquad {\bf f}_2(t) = \begin{bmatrix} f_{12}(t) \\ f_{22}(t) \\ f_{32}(t) \end{bmatrix}, \qquad \mathrm{and} \qquad {\bf f}_3(t) = \begin{bmatrix} f_{13}(t) \\ f_{23}(t) \\ f_{33}(t) \end{bmatrix} $ be vectors of functions (i.e. functions from $\mathbb{R}$ to $\mathbb{R}^3$).

We say the set $\{ {\bf f}_1(t), {\bf f}_2(t), {\bf f}_3(t) \}$ is linearly dependent on $I \subseteq \mathbb{R}$ (some set of real numbers) if there exist $c_1,c_2,c_3 \in \mathbb{R}$ (not all zero) such that $c_1{\bf f}_1(t)+c_2{\bf f}_2(t)+c_3{\bf f}_3(t)={\bf 0}$ for all $t \in I$. [Be careful here: this equation must hold for all $t$ in $I$ simultaneously, with the same constants.]

This equation can be recast in terms of matrices. We have linear dependence if and only if there exists some constant vector ${\bf c} \neq {\bf 0}$ such that ${\bf F}(t){\bf c}={\bf 0}$ for all $t \in I$, where ${\bf F}(t) = [{\bf f}_1(t) \;{\bf f}_2(t) \;{\bf f}_3(t)]$. Writing this out in expanded form:

$ {\bf F}(t){\bf c} = \begin{bmatrix} f_{11}(t) & f_{12}(t) & f_{13}(t) \\ f_{21}(t) & f_{22}(t) & f_{23}(t) \\ f_{31}(t) & f_{32}(t) & f_{33}(t) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

Now the determinant of ${\bf F}(t)$ is known as the Wronskian of the functions ${\bf f}_1,{\bf f}_2,{\bf f}_3$. That is, $W(t) = \det({\bf F}(t))$.

Now we call on basic linear algebra. The columns of an $n \times n$ matrix $A$ are linearly dependent if and only if there is a non-trivial (i.e. non-zero) solution of $A{\bf x}={\bf 0}$, which is true if and only if $\det(A)=0$.
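As a quick illustration of this fact for a matrix of constants (the numbers here are just an example):

    import sympy as sp

    # A singular 3x3 constant matrix: its columns are linearly dependent.
    A = sp.Matrix([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
    print(A.det())        # 0
    print(A.nullspace())  # [Matrix([[1], [-2], [1]])], i.e. col1 - 2*col2 + col3 = 0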

But be very careful: this is a statement about a matrix of constants (not of functions). To show the columns of ${\bf F}$ are linearly dependent, we need a single non-zero solution that works for all $t$ in $I$.

So only the following can be said: IF the columns of ${\bf F}(t)$ are linearly dependent on $I$, THEN there is a non-zero solution of ${\bf F}(t){\bf c}={\bf 0}$ that works for all $t$ in $I$. Thus $W(t)=\det({\bf F}(t))=0$ for all $t$ in $I$.

The converse does not hold in general.

However, for sets of solutions of a linear system of ODEs, Abel's identity shows that the Wronskian is a constant multiple of an exponential function. Thus if it is zero somewhere, it is zero everywhere (well, everywhere the functions are solutions, anyway). So in intro DE classes, professors will commonly state that we have linear dependence if and only if the Wronskian is zero, and then proceed to use examples for which this "theorem" isn't necessarily true! The implication does not go both ways in general.
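Here is a small sympy check of Abel's identity for one constant-coefficient system (the matrix is just an example): the columns of $e^{At}$ solve ${\bf x}' = A{\bf x}$, and their Wronskian is $e^{t\,\mathrm{tr}(A)}$, which never vanishes.

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, 2],
                   [0, 3]])        # trace(A) = 4
    Phi = (A * t).exp()            # fundamental matrix: columns solve x' = A x
    print(sp.simplify(Phi.det()))  # exp(4*t), i.e. exp(trace(A)*t), never zero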

Now finally, how do we connect this back to regular functions? Well, consider functions $f,g,h$. They are linearly dependent on some set of real numbers $I$ if we can find $a,b,c \in \mathbb{R}$ (not all zero) such that $af(t)+bg(t)+ch(t)=0$ for all $t$ in $I$. Differentiating this equation repeatedly gives more equations:

$ \begin{array}{ccc} af(t)+bg(t)+ch(t) & = & 0 \\ af'(t)+bg'(t)+ch'(t) & = & 0 \\ af''(t)+bg''(t)+ch''(t) & = & 0 \end{array} $

Now we're back to the discussion of linear independence, applied to the set: $ \left\{ \begin{bmatrix} f(t) \\ f'(t) \\ f''(t) \end{bmatrix}, \begin{bmatrix} g(t) \\ g'(t) \\ g''(t) \end{bmatrix}, \begin{bmatrix} h(t) \\ h'(t) \\ h''(t) \end{bmatrix} \right\} $

So the Wronskian you're using is a special case of the Wronskian for systems of ODEs.
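For instance ($\sin$, $\cos$, and $\exp$ are just sample functions here), stacking each function with its derivatives and taking the determinant reproduces the scalar Wronskian:

    import sympy as sp

    t = sp.symbols('t')
    f, g, h = sp.sin(t), sp.cos(t), sp.exp(t)

    # Each column is a function stacked with its first and second derivatives.
    F = sp.Matrix([[f,                g,                h],
                   [sp.diff(f, t),    sp.diff(g, t),    sp.diff(h, t)],
                   [sp.diff(f, t, 2), sp.diff(g, t, 2), sp.diff(h, t, 2)]])
    print(sp.simplify(F.det()))  # -2*exp(t): never zero, so the three are independent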

I hope this clears things up a little!

  • Yes, if you replace an $n$-th order equation with a system of $n$ first-order equations in the usual way, the new variables you introduce end up being equal to the derivatives of the original unknown; see the reduction sketched below. (2018-08-10)
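As an illustration of the reduction mentioned in this comment: for $y'' + p(t)y' + q(t)y = 0$, set $y_1 = y$ and $y_2 = y'$; then $ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}' = \begin{bmatrix} 0 & 1 \\ -q(t) & -p(t) \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, $ so $y_2$ is, by construction, the derivative of $y_1$.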
2

The problem seems to have been solved in the discussion in the comments: the "function vectors" are vectors containing the (zeroth and first) derivatives of a function, so the Wronskian is the determinant of the matrix formed with those vectors as columns. Indeed, here $(t, 1) = (f, f')$ with $f = t$, and $(t^{2}, 2t) = (g, g')$ with $g = t^{2}$.

0

I don't have enough rep to comment yet, but I want to thank Bill! I was wondering about this myself and (eventually) reasoned my way to the same conclusion he wrote up. The forward implication from linear algebra makes sense for linear independence; what stalled me at the converse was this counterexample:

$ \left\{ \begin{bmatrix} t \\ t^3 \end{bmatrix}, \begin{bmatrix} t^2 \\ t^4 \end{bmatrix} \right\} $

If we take the determinant of the matrix with these vectors as its columns, we get $t^5 - t^5 = 0$ for all $t$. But the set is linearly independent (sticking with the definition above: no non-trivial choice of constants sends the combination to zero for all $t$).
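A quick sympy check of this counterexample (the sample points $t=1$ and $t=2$ are arbitrary):

    import sympy as sp

    t, c1, c2 = sp.symbols('t c1 c2')
    F = sp.Matrix([[t,    t**2],
                   [t**3, t**4]])
    print(sp.simplify(F.det()))  # 0: the Wronskian vanishes identically

    # Yet no single nonzero (c1, c2) works for every t: compare t = 1 and t = 2.
    c = sp.Matrix([c1, c2])
    eqs = list(F.subs(t, 1) * c) + list(F.subs(t, 2) * c)
    print(sp.solve(eqs, [c1, c2]))  # {c1: 0, c2: 0}: only the trivial combination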