This is a brief survey of the math required to analyze waves at the first or second year university level. If you did well in grade 12 high school math, you'll probably be able to follow this and learn some new and really cool math.
If \$ (5)^2 = 25 \$ and \$ (-5)^2 = 25 \$, what number can you put in the box so that:
\$$ \Box ^2 = -25 \$$
It turns out that there is no real number such that when you multiply it by itself you get a negative number. But could we invent an imaginary one?
Let's create an imaginary number called \$ i \$ such that:
\$$ i = \sqrt{-1} \qquad \Rightarrow \qquad i^2 = -1 \$$
Even though \$ i \$ is nowhere on the real line (in math, we say that \$ i \not\in \mathbb{R} \$), we can nonetheless perform interesting mathematical operations with it:
\$$(1 + i) \$$
\begin{align*} (1+i)^2 &= (1+i)\cdot(1+i) \\ &= 1 + 2i + i^2 \\ &= 1 + 2i - 1 \\ &= 2i \end{align*}
\begin{equation*} z^4 = 16 \Rightarrow z^2 = \left\{ \begin{array}{rl} 4 \Rightarrow z &= \pm 2 \\ -4 \Rightarrow z &= \pm 2i \end{array} \right. \end{equation*}
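These results are easy to check numerically. Here is a quick sanity check in Python, whose built-in complex type writes the imaginary unit as 1j:

```python
# Verify the worked examples above with Python's built-in complex numbers.
z = 1 + 1j
print(z**2)  # (1 + i)^2 = 2i  ->  prints 2j

# All four solutions of z^4 = 16 found above:
for w in (2, -2, 2j, -2j):
    print(w, w**4)  # each fourth power equals 16 (up to rounding)
```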
If these weird numbers follow all of the algebra rules without inconsistencies, does it mean they exist as much as the real numbers? Aren't complex numbers a mere creation by mad mathematicians? How about mathematics itself: is it discovered or invented?1)
In a certain way, negative numbers are just as weird as complex numbers: after all, we know what 5 cars look like, but what does −5 cars mean? And yet, in certain contexts (like temperature), we have no problem using negative numbers. Could it be that there are contexts where complex numbers make sense?
In the same way that we can represent real numbers by a point on the real number line...2):
... we can also represent a complex number graphically on a complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. For example, \$ (1 + i) \$ would be represented as a point 45° up the horizontal axis and \$ \sqrt{2} \$ away from the origin:
You can move the point around to look at other complex numbers on the plane.
Download polar.ggb
To convert between the Cartesian \$(a,b) \$ and the Polar \$ (r \angle \theta) \$ representations, only simple trigonometry and the Pythagorean theorem are needed.
\$$ (a, b) \rightarrow (r\angle \theta) \$$ | \$$ (r\angle \theta) \rightarrow (a, b)\$$ |
---|---|
\$$ r^2 = a^2 + b^2 \$$ | \$$ a = r\cos\theta \$$ |
\$$ \tan \theta = \dfrac{b}{a} \$$ | \$$ b = r\sin\theta \$$ |
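For a concrete check, Python's standard cmath module implements exactly these conversions (polar returns \$ (r, \theta) \$ with \$ \theta \$ in radians):

```python
import cmath
import math

z = 1 + 1j                   # the point (1, 1) on the complex plane
r, theta = cmath.polar(z)    # r = sqrt(a^2 + b^2), theta = atan2(b, a)
print(r, theta)              # sqrt(2) ~ 1.414..., pi/4 ~ 0.785...
print(cmath.rect(r, theta))  # converts back to (approximately) 1 + 1j
```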
Note that very often, we use radians instead of degrees for the angle. There are a total of 360° or 2π radians in a circle. While most people are used to degrees, a radian is actually much easier to picture:
The complex plane has many useful applications, but one of them allows us to visualize roots of the form \$ z^n = w \$. For example, if we set \$ w = 9 \$ and \$ n = 2 \$ on the graph below, we'll see that the roots of \$z^2 = 9 \$ are \$ z = \pm 3 \$.
Download complexroots.ggb
Without using the graph above, what do you expect the solution(s) to \$ z^3 = 8 \$ will be? That is, what number(s), when cubed, give 8?
Now move \$ w = 8 \$ and \$n = 3 \$ to have a look at the solutions graphically; you might be surprised by what you find.
The Euler identity exposes a deep relationship between trigonometric and exponential functions3):
\$$ e^{i \theta} = \cos \theta + i \sin \theta \$$
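Before verifying it properly, we can at least spot-check the identity numerically; here is a quick sketch using Python's cmath module:

```python
import cmath
import math

# exp(i*theta) and cos(theta) + i*sin(theta) should agree at every angle,
# up to floating-point rounding.
for theta in (0.0, math.pi / 6, math.pi / 2, math.pi, 2.5):
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    print(theta, abs(lhs - rhs))  # differences should be ~1e-16 at most
```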
Let's use two different ways to verify that this mysterious identity is true.
If we separate this identity into two functions and take their derivatives, we notice that: \begin{align*} && f(\theta) &= e^{i \theta} &\text{ & }&& g(\theta) &= \cos \theta + i \sin \theta \\ \Rightarrow && f'(\theta) &= i e^{i \theta} &\text{ & }&& g'(\theta) &= -\sin \theta + i \cos \theta \\ \Rightarrow && f'(\theta) &= i \cdot f(\theta) &\text{ & }&& g'(\theta) &= i \cdot g(\theta) \end{align*}
We know that there's only one function \$ h(x) \$ that satisfies the differential equation \$ h'(x) = ah(x) \$, and it is \$ h(x) = A e^{ax} \$. What Euler discovered is that when \$ a = i \$, there's a second function that also satisfies the same differential equation! These two functions must therefore be one and the same.
Another method to verify the Euler identity is to use Taylor series:
\begin{align*} e^x &= 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots \\ \sin x &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots \\ \cos x &= 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots \end{align*}
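Substituting \$ x = i\theta \$ into the exponential series and using \$ i^2 = -1 \$ (so the powers of \$ i \$ cycle through \$ i, -1, -i, 1 \$), the terms regroup into exactly the cosine and sine series:

\begin{align*} e^{i\theta} &= 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \frac{(i\theta)^5}{5!} + \cdots \\ &= 1 + i\theta - \frac{\theta^2}{2!} - i\frac{\theta^3}{3!} + \frac{\theta^4}{4!} + i\frac{\theta^5}{5!} - \cdots \\ &= \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots \right) + i \left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots \right) \\ &= \cos \theta + i \sin \theta \end{align*}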
In the previous section, we saw that a complex number \$z = a + ib \$ could be represented as a point \$(a, b)\$ on the complex plane, which could also be viewed in polar coordinates as \$ (r\angle \theta) \$. We saw that to convert between the Cartesian \$(a,b) \$ and the Polar \$ (r \angle \theta) \$ representations, only simple trigonometry and the Pythagorean theorem are needed:
\$$ (a,b) \rightarrow (r\angle \theta) \$$ | \$$ (r\angle \theta) \rightarrow (a,b)\$$ |
---|---|
\$$ r^2 = a^2 + b^2 \$$ | \$$ a = r\cos\theta \$$ |
\$$ \tan \theta = \dfrac{b}{a} \$$ | \$$ b = r\sin\theta \$$ |
This means that:
\begin{align*} z &= a + ib \\ &= r\cos\theta + i r\sin\theta \\ &= r\big( \cos\theta + i \sin\theta \big) \\ &= r e^{i\theta} \end{align*}
This offers another interpretation of the Euler identity as the algebraic conversion between Cartesian and Polar coordinates:
 | Cartesian | Polar |
---|---|---|
Graphical | \$$(a, b) \$$ | \$$ (r\angle \theta) \$$ |
Algebraic | \$$z = a + ib\$$ | \$$ z = re^{i\theta} \$$ |
This now allows us to simplify a lot of difficult mathematics. For example, let's look at the root problem \$z^3 = 8 \$ again. Since the number 8 on the complex plane is the point \$(8,0)\$, in polar coordinates it can be any of the following: \$8\angle 0, 8\angle 2\pi, 8\angle 4\pi, \cdots \$. This is because we can go around the circle as many times as we want and return to the same point. Since we expect three roots, let's use the first three polar representations of 8:
\$$ z^3 = 8 = \left\{ \begin{array}{c} 8e^{0 i} \\ 8e^{2\pi i} \\ 8e^{4\pi i}\\ \vdots \end{array} \right. \$$
\$$ \Rightarrow z = \left\{ \begin{array}{lcl} \left(8e^{0 i}\right)^{\frac{1}{3}} &=& 8^{\frac{1}{3}} e^{\frac{0}{3}i} = 2\\ \left(8e^{2\pi i}\right)^{\frac{1}{3}} &=& 8^{\frac{1}{3}} e^{\frac{2\pi}{3}i} = 2 \left(\cos \frac{2\pi}{3} + i \sin \frac{2\pi}{3}\right) = 2 \left(-\frac{1}{2} + i \frac{\sqrt{3}}{2}\right) = -1 + i\sqrt{3} \\ \left(8e^{4\pi i}\right)^{\frac{1}{3}} &=& 8^{\frac{1}{3}} e^{\frac{4\pi}{3}i} = 2 \left(\cos \frac{4\pi}{3} + i \sin \frac{4\pi}{3}\right) = 2 \left(-\frac{1}{2} - i \frac{\sqrt{3}}{2}\right) = -1 - i\sqrt{3} \\ &\vdots& \end{array} \right. \$$
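We can check all three roots numerically; cubing each one in Python should give back (approximately) 8:

```python
# The three cube roots of 8 found above, written in Cartesian form.
roots = [2, -1 + 1j * 3**0.5, -1 - 1j * 3**0.5]
for z in roots:
    print(z, z**3)  # each cube is 8, up to floating-point rounding
```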
If we had used more than 3 numbers, the roots would have started repeating. If we had used fewer than 3, we would have missed some answers, in the same way that there are two answers to \$z^2 = 9 \$ (namely \$z = \pm 3 \$).
Which is the best representation: Cartesian, \$z = a + i b \$, or Polar, \$z = re^{i\theta}\$? As you might expect, it depends on what you're trying to do. For example, let's take:
\$$z_1 = 1 + i = \sqrt{2}\,e^{i\frac{\pi}{4}} \quad \text{and} \quad z_2 = -1 + i = \sqrt{2}\,e^{i\frac{3\pi}{4}} \$$
Imagine having to add, subtract, multiply, or divide these together. Or raise them to a power, or take a root of them. Which of the two representations do you think would be easiest to use for each operation?
The lesson here is that since the polar representation uses exponents, and exponents turn multiplication into addition4), the polar representation is easiest for multiplication, division, exponentiation, and roots. (It's essentially why the dB scale is so useful.) But addition and subtraction are intrinsically easier in Cartesian coordinates.
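As a small illustration, here is the product \$ z_1 z_2 \$ computed both ways in Python; in polar form the moduli multiply (\$ \sqrt{2}\cdot\sqrt{2} = 2 \$) and the angles add (\$ \frac{\pi}{4} + \frac{3\pi}{4} = \pi \$), giving \$ 2e^{i\pi} = -2 \$:

```python
import cmath
import math

z1, z2 = 1 + 1j, -1 + 1j
print(z1 * z2)  # Cartesian multiplication: prints (-2+0j)

# Polar multiplication: multiply the moduli, add the angles.
r = math.sqrt(2) * math.sqrt(2)
theta = math.pi / 4 + 3 * math.pi / 4
print(cmath.rect(r, theta))  # approximately -2
```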
\$$ \cos \theta = \dfrac{e^{i \theta} + e^{-i \theta}}{2} \qquad \text { & } \qquad \sin \theta = \dfrac{e^{i \theta} - e^{-i \theta}}{2i} \$$
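These follow directly from the Euler identity: writing it for \$ \theta \$ and for \$ -\theta \$ (using \$ \cos(-\theta) = \cos\theta \$ and \$ \sin(-\theta) = -\sin\theta \$), then adding or subtracting the two equations:

\begin{align*} e^{i\theta} + e^{-i\theta} &= (\cos \theta + i \sin \theta) + (\cos \theta - i \sin \theta) = 2\cos \theta \\ e^{i\theta} - e^{-i\theta} &= (\cos \theta + i \sin \theta) - (\cos \theta - i \sin \theta) = 2i\sin \theta \end{align*}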
\$$ \cos (\theta + \phi) = \cos \theta \cos \phi - \sin \theta \sin \phi\\ \sin (\theta + \phi) = \sin \theta \cos \phi + \cos \theta \sin \phi \$$
\$$ \sin (\theta + \Delta \theta) + \sin (\theta - \Delta \theta) = 2 \cos \Delta \theta \sin \theta \$$
This last result is the basis behind why modulating the amplitude of a carrier produces side bands.
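This identity follows from the sum formula for sine applied with \$ \phi = \pm\Delta\theta \$; the \$ \cos\theta\sin\Delta\theta \$ terms cancel:

\begin{align*} \sin(\theta + \Delta\theta) + \sin(\theta - \Delta\theta) &= \big(\sin \theta \cos \Delta\theta + \cos \theta \sin \Delta\theta\big) + \big(\sin \theta \cos \Delta\theta - \cos \theta \sin \Delta\theta\big) \\ &= 2 \cos \Delta\theta \sin \theta \end{align*}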
In the physics of waves, we often have to find solutions to the following type of differential equation:
\$$ a \ddot{x}(t) + b \dot{x}(t) + c x(t) = 0 \$$
\$$ \dot{x}(t) = x'(t) = \frac{dx}{dt} \quad \text{and} \quad \ddot{x}(t) = x''(t) = \frac{d^2x}{dt^2} \$$
In our applications, the parameters \$ a, b, c \$ are all real and positive quantities. Even without having studied differential equations in any depth, we can imagine that a possible solution to the above differential equation would be \$ x(t)= e^{rt} \$, since the derivative of an exponential function is itself an exponential function, which is encouraging.
The next step is to try this "test function" in the differential equation and see if we can find the values of \$ r \$ that make it work. First we'll need the derivatives of the test function:
\begin{align*} & x (t) = e^{rt} \\ \Rightarrow \qquad & \dot{x}(t) = r e^{rt} \\ \Rightarrow \qquad & \ddot{x}(t) = r^2 e^{rt} \end{align*}
When we put these into the differential equation, we get:
\begin{align*} & a \ddot{x}(t) + b \dot{x}(t) + c x(t) = 0 \\ \Rightarrow \qquad & a (r^2 e^{rt}) + b (r e^{rt}) + c (e^{rt}) = 0 \\ \Rightarrow \qquad & e^{rt} (a r^2 + b r + c ) = 0 \\ \Rightarrow \qquad & a r^2 + b r + c = 0 \\ \Rightarrow \qquad &r = - \dfrac{b}{2a} \pm \dfrac{\sqrt{b^2 - 4ac}}{2a} \end{align*}
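In other words, \$ r \$ must be a root of the quadratic \$ ar^2 + br + c = 0 \$ (the characteristic equation). A quick numerical check, using arbitrary example coefficients (the values below are hypothetical):

```python
import cmath

a, b, c = 1.0, 2.0, 5.0             # example coefficients; b^2 - 4ac < 0 here
disc = cmath.sqrt(b * b - 4 * a * c)
r1 = -b / (2 * a) + disc / (2 * a)
r2 = -b / (2 * a) - disc / (2 * a)
for r in (r1, r2):
    print(r, a * r**2 + b * r + c)  # residual should be (essentially) zero
```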
So what does that result mean? Remember, what we're looking for is the function \$x(t)\$ that satisfies the differential equation \$ a \ddot{x}(t) + b \dot{x}(t) + c x(t) = 0 \$.
What we have so far says that our test function \$x(t) = e^{rt}\$ will satisfy the differential equation if \$r\$ is given by the equation above. There is still a lot to unpack, however. For example, since \$r\$ contains a square root, it could be real or complex depending on the values of \$a, b,\$ and \$c\$. And as we saw above, if \$r\$ is real, then \$x(t)\$ will be a real exponential function. But if \$r\$ is complex, then we can expect \$x(t)\$ to be some sort of sinusoidal function (recall the Euler identity).
To simplify the notation, let's define \$\alpha\$ and \$\beta\$ as: \$$ \alpha = \dfrac{b}{2a} \qquad \text{and} \qquad \beta = \dfrac{\sqrt{|{b^2 - 4ac}|}}{2a} \$$
Notice how the absolute value under the square root ensures that \$\beta\$ is always real.
\$r\$ is then:
\$$ r = \left\{ \begin{array}{ll} -\alpha \pm \beta & \text{if } b^2 - 4ac > 0,\\ -\alpha \pm i \beta & \text{if } b^2 - 4ac < 0, \end{array} \right. \$$
Let's examine both of these cases in more detail.
When \$ b^2 - 4ac > 0 \$ , \$ r \$ is real and the general solution is:
\begin{align*} x(t) &= A_1 e^{r_1 t} + A_2 e^{r_2 t} \\ &= A_1 e^{( -\alpha + \beta) t} + A_2 e^{( -\alpha - \beta) t} \\ &= A_1 e^{-\alpha t} e^{\beta t} + A_2 e^{-\alpha t} e^{-\beta t} \end{align*}
\$$ x(t) = e^{-\alpha t} ( A_1 e^{\beta t} + A_2 e^{-\beta t} ) \$$
It's normal to have two constants of integration since our differential equation has a second degree derivative in it. To find these constants, we'd need to know more about the system's initial conditions.
When \$ b^2 - 4ac < 0 \$, \$ r \$ is complex and we'll use the Euler identity to simplify our solutions:
\begin{align*} x(t) &= A_1 e^{r_1 t} + A_2 e^{r_2 t} \\ &= A_1 e^{( -\alpha + i \beta) t} + A_2 e^{( -\alpha - i \beta) t} \\ &= A_1 e^{-\alpha t} e^{i \beta t} + A_2 e^{-\alpha t} e^{-i \beta t} \\ &= e^{-\alpha t} ( A_1 e^{i \beta t} + A_2 e^{-i \beta t} ) \\ &= e^{-\alpha t} \Big( A_1 \big(\cos( \beta t) + i \sin( \beta t) \big) + A_2 \big(\cos( -\beta t) + i \sin( -\beta t) \big) \Big) \\ &= e^{-\alpha t} \Big( A_1 \big(\cos(\beta t) + i \sin(\beta t) \big) + A_2 \big(\cos(\beta t) - i \sin(\beta t)\big)\Big) \\ &= e^{-\alpha t} \Big( (A_1 + A_2) \cos(\beta t) + i (A_1 - A_2) \sin(\beta t) \Big) \\ &= e^{-\alpha t} \Big( a_1 \cos(\beta t) + a_2 \sin(\beta t) \Big) \\ &= e^{-\alpha t} \Big( A \sin \phi \cos(\beta t) + A \cos \phi \sin(\beta t) \Big) \\ &= Ae^{-\alpha t} \Big(\sin \phi \cos(\beta t) + \cos \phi \sin(\beta t) \Big) \end{align*}
In the last three lines, we've redefined the constants of integration a few times so that:
\begin{align*} a_1 &= A_1 + A_2, & a_2 &= i(A_1 - A_2) \\ a_1 &= A \sin \phi, & a_2 &= A \cos \phi \end{align*}
And we finally use one of the trig identities we proved earlier to write the solution as: \$$ x(t) = A e^{-\alpha t} \sin(\beta t + \phi) \$$
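We can sanity-check this solution numerically by differentiating \$x(t)\$ with centered finite differences and plugging the results back into the differential equation (all coefficient values below are arbitrary examples with \$ b^2 - 4ac < 0 \$):

```python
import math

# Check numerically that x(t) = A e^{-alpha t} sin(beta t + phi)
# satisfies a x'' + b x' + c x = 0, using centered finite differences.
a, b, c = 1.0, 2.0, 5.0
alpha = b / (2 * a)                                  # decay rate
beta = math.sqrt(abs(b * b - 4 * a * c)) / (2 * a)   # oscillation frequency
A, phi = 1.3, 0.7                                    # arbitrary constants

def x(t):
    return A * math.exp(-alpha * t) * math.sin(beta * t + phi)

h, t = 1e-4, 0.9
xd = (x(t + h) - x(t - h)) / (2 * h)              # approximates x'(t)
xdd = (x(t + h) - 2 * x(t) + x(t - h)) / h**2     # approximates x''(t)
print(a * xdd + b * xd + c * x(t))                # should be close to 0
```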
When \$ b^2 - 4ac = 0 \$, \$ r = -\frac{b}{2a} \$ is real and negative, but our test function gives only one independent solution instead of the two we need. We'll instead propose a solution of the following form and verify that it works: \begin{align*} && x(t) &= e^{rt}(A + Bt) \\ \Rightarrow && \dot x(t) &= re^{rt}(A + Bt) + Be^{rt} \\ && &= e^{rt}\big(r(A + Bt) + B\big) \\ && &= e^{rt}(rA + B + Brt) \\ \Rightarrow &&\ddot x(t) &= re^{rt}(rA + B + Brt) + e^{rt}Br \\ && &= e^{rt}\big(r(rA + B + Brt) + Br\big) \\ && &= e^{rt}(Ar^2 + 2Br + Br^2t) \end{align*}
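Substituting these derivatives into the differential equation and grouping the terms by \$A\$, \$B\$, and \$Bt\$:

\begin{align*} a \ddot{x}(t) + b \dot{x}(t) + c x(t) &= e^{rt}\Big( A\,(a r^2 + b r + c) + B\,(2 a r + b) + Bt\,(a r^2 + b r + c) \Big) \\ &= 0 \end{align*}

Both groupings vanish: \$ ar^2 + br + c = 0 \$ because \$r\$ is a root of the characteristic equation, and \$ 2ar + b = 0 \$ because \$ r = -\frac{b}{2a} \$. The proposed solution therefore works, and in this case the general solution is \$ x(t) = e^{-\frac{b}{2a} t}(A + Bt) \$.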
We therefore have two completely different types of solutions, depending on the three parameters \$ a, b, c \$. To see how these parameters affect the graph, let's imagine that one of our initial conditions is \$ \phi = \frac{\pi}{2} \$. This means that:
\$ \begin{align*} & a_1 = A \sin \pi/2 = A & & a_2 = A \cos \pi/2 = 0 \\ \Rightarrow \qquad & A_1 + A_2 = A & & A_1 - A_2 = 0 \\ \Rightarrow \qquad & A_1 = A/2 & & A_2 = A/2 \end{align*} \$
In this particular case, we therefore have:
\$ \begin{equation*} x(t) = \left\{ \begin{array}{rl} A e^{-\alpha t} \dfrac{e^{\beta t} + e^{-\beta t}}{2} & \text{if } b^2 - 4ac > 0 ,\\ A e^{-\alpha t} \cos(\beta t) & \text{if } b^2 - 4ac < 0 ,\\ \end{array} \right. \end{equation*} \$