watch: Steven | Fourier Analysis

Overview

Source video: Fourier Analysis: Overview

Many major breakthroughs started with a coordinate transformation (e.g., the theory of relativity).

The Fourier transform is another such coordinate transformation.

Fourier derived the Fourier series as a way of approximating solutions of partial differential equations, e.g., the heat equation.

A rectangular metal plate has a temperature distribution u(x,y,t) in two dimensions, which is governed by the heat equation:

$$ \frac{∂u}{∂t} = α (\frac{∂²u}{∂x²} + \frac{∂²u}{∂y²}) = α ∇²u $$

The Fourier transform diagonalizes the Laplacian operator in the heat equation. That means the Laplacian operator has eigenvalues and eigenfunctions, just like other linear operators do.

  • The eigenfunctions are sines and cosines, with particular frequencies determined by the boundary conditions and the geometry of the object.
  • The eigenvalues are those spatial frequencies.

An arbitrary function can be approximated by a sum of sines and cosines of increasing frequencies

(Figure: an arbitrary function f(x) built up from sine waves sin(wt).)

These sines and cosines form an orthogonal basis for the space of possible functions. (Read as “a basis”, not “a set of bases” — the whole collection is one basis.) For example, in a 2D vector space, the x-axis and y-axis form the basis.

The Fast Fourier Transform (FFT) computes Fourier series coefficients efficiently, and is used to analyze and process data.
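As a minimal sketch of that idea, using NumPy's `np.fft` with a made-up two-tone test signal (the 5 Hz / 12 Hz frequencies are arbitrary choices):

```python
import numpy as np

# A made-up test signal: two sine waves at 5 Hz and 12 Hz,
# sampled 1000 times over one second.
n = 1000
t = np.linspace(0, 1, n, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The FFT returns the complex Fourier coefficients; their magnitudes
# peak at exactly the frequencies present in the signal.
fhat = np.fft.fft(f)
freqs = np.fft.fftfreq(n, d=1 / n)          # frequency axis in Hz

# The two strongest positive-frequency bins sit at 5 Hz and 12 Hz.
positive = freqs > 0
peaks = sorted(freqs[positive][np.argsort(np.abs(fhat[positive]))[-2:]])
print(peaks)
```

The FFT does in O(n log n) what naive evaluation of the coefficient integrals would do in O(n²), which is why it is the workhorse for data analysis.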


Fourier Series-1

Source video: Fourier Series: Part 1

  • (2023-11-19) “Series” means a list of numbers, where the meaning of each term is implied. Thus in a Fourier series, each term stands for a basis function, which is a trigonometric function of increasingly high frequency.

    The Fourier transform, in contrast, is a function that gives the coefficient at an arbitrary point of the “series”.

The Fourier series approximates an arbitrary function f(x) as an infinite sum of sines and cosines of increasingly high frequency.

  • (2023-11-19) For a 1D signal, sine (or cosine) alone is enough. For a 2D signal, both sine and cosine are needed, as they’re orthogonal.

Consider a 2π-periodic f(x):

(Figure: a 2π-periodic function f(x) on the interval [-π, π].)

The function f(x) can be represented as a sum from k=1 to infinity of 2π-periodic sines and cosines of increasingly high frequency:

$$ f(x) = A₀/2 + ∑ₖ₌₁^∞ (Aₖcos(kx) + Bₖsin(kx) ) $$

where

  • Aₖ, Bₖ are Fourier coefficients, which indicate how much of each sine and cosine is needed to add up to recover f(x);

  • k stands for k-th frequency.

  • When k=0, cos(0)=1 and sin(0)=0, which leaves the constant term A₀/2.

  • These sine and cosine waves are 2π-periodic.

Coefficients are inner products

These Fourier coefficients can be computed as inner products (in Hilbert space) of the function f(x) with a particular wave:

$$ Aₖ = 1/π ∫_{-π}^π f(x) cos(kx) dx\\\ Bₖ = 1/π ∫_{-π}^π f(x) sin(kx) dx $$

More explicitly, Aₖ and Bₖ are the inner products of f(x) with the k-th wave, normalized by the magnitude of the (co)sine function.

doubt: Shouldn’t the normalization use the norm itself? Why is it divided by the norm squared?

  • (2023-07-11) The normalization of a Fourier coefficient divides by the basis vector’s length twice: once when computing the coefficient, where f is dotted with the unit basis vector, and once when reconstructing the signal, where the coefficient multiplies the unit basis vector to give the component in that direction.
$$ Aₖ = \frac{1}{‖cos(kx)‖²} ⋅ ⟨ f(x), cos(kx) ⟩ \\\ \ \\\ Bₖ = \frac{1}{‖sin(kx)‖²} ⋅ ⟨ f(x), sin(kx) ⟩ $$
  • When the function f(x) is projected onto the direction cos(kx), that direction must have unit length, so we divide by the norm of cos(kx).

  • ‖cos(kx)‖² can be computed as the inner product of cos(kx) with itself, which is π.
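Both facts can be checked numerically. The sketch below uses a Riemann sum over [-π, π]; the test function f(x) = sin(x) + 0.5·cos(2x) is my own choice, picked so the expected coefficients are known:

```python
import numpy as np

# Discretize [-π, π]; endpoint=False keeps the sum periodic and accurate.
x = np.linspace(-np.pi, np.pi, 100_000, endpoint=False)
dx = x[1] - x[0]

# ‖cos(kx)‖² = ⟨cos(kx), cos(kx)⟩ = ∫ cos²(kx) dx = π   (here k = 3)
norm_sq = np.sum(np.cos(3 * x) ** 2) * dx

# Fourier coefficients of a test function with known content:
# f(x) = sin(x) + 0.5·cos(2x)  →  B₁ = 1, A₂ = 0.5, all others 0.
f = np.sin(x) + 0.5 * np.cos(2 * x)
B1 = (1 / np.pi) * np.sum(f * np.sin(x)) * dx
A2 = (1 / np.pi) * np.sum(f * np.cos(2 * x)) * dx
print(norm_sq, B1, A2)
```

The 1/π in the coefficient formulas is exactly the 1/‖cos(kx)‖² normalization, since the norm squared comes out to π.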

Projection & reconstruction

(Figure: a vector 𝐟 drawn in two coordinate systems, 𝐱-𝐲 and 𝐮-𝐯.)

A test vector 𝐟 can be represented separately in two orthogonal bases of a 2D vector space (two coordinate systems: 𝐱-𝐲 and 𝐮-𝐯):

$$ 𝐟 = ⟨𝐟, 𝐱⟩ ⋅ \frac{𝐱}{‖𝐱‖²}\ + ⟨𝐟, 𝐲⟩ ⋅ \frac{𝐲}{‖𝐲‖²} \\\ \ \\\ 𝐟 = ⟨𝐟, 𝐮⟩ ⋅ \frac{𝐮}{‖𝐮‖²}\ + ⟨𝐟, 𝐯⟩ ⋅ \frac{𝐯}{‖𝐯‖²} $$

The definition of the Fourier series is the same as projecting the vector 𝐟 onto an orthogonal basis of a 2D vector space.

The coefficients are the projections of 𝐟 onto each basis direction (i.e., inner products); each coefficient is then multiplied by the corresponding unit vector, and the results are summed.

That means, in the Fourier series, the sines and cosines are the orthogonal basis functions, and the coefficients indicate how much of 𝐟 lies in each direction.
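The same projection-and-reconstruction step can be sketched in plain 2D (the vectors here are arbitrary; 𝐮 and 𝐯 are orthogonal but deliberately not unit length, so the 1/‖·‖² normalization matters):

```python
import numpy as np

f = np.array([3.0, 4.0])

# An orthogonal basis that is NOT unit length:
u = np.array([1.0, 1.0])
v = np.array([1.0, -1.0])
assert u @ v == 0                       # the basis vectors are orthogonal

# f = ⟨f,u⟩·u/‖u‖² + ⟨f,v⟩·v/‖v‖²  — project, then recombine.
f_rec = (f @ u) * u / (u @ u) + (f @ v) * v / (v @ v)
print(f_rec)                            # recovers [3. 4.]
```

Replacing u and v with cos(kx) and sin(kx), and the dot product with the Hilbert-space integral, gives exactly the Fourier coefficient formulas.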

Fourier Series-2

Source video: Fourier Series: Part 2

Arbitrary period L

Generalize the period from (-π, π) to (0, L):

$$ f(x) = \frac{A₀}{2} \ + ∑ₖ₌₁^∞ (Aₖcos( \frac{2πkx}{L} ) + Bₖsin( \frac{2πkx}{L} ) ) $$

where:

  • cos( 2πkx/L ) and sin( 2πkx/L ) are periodic between 0 and L.

  • That is, when x=0, 2πkx/L=0, and when x=L, 2πkx/L=2πk, i.e., the wave returns to its start.

  • These cos( 2πkx/L ) and sin( 2πkx/L ) form an orthogonal basis for the Hilbert space of functions on (0, L).

The Fourier coefficients change their bounds from (-π, π) to (0, L):

$$ Aₖ = \frac{2}{L} ∫_0^L f(x) cos(2πkx/L) dx\\\ Bₖ = \frac{2}{L} ∫_0^L f(x) sin(2πkx/L) dx $$
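A quick numerical check of the period-L formulas (a sketch; L = 4 and the test function are arbitrary choices, picked so the expected coefficients are known):

```python
import numpy as np

L = 4.0
x = np.linspace(0, L, 100_000, endpoint=False)
dx = x[1] - x[0]

# Test function: the constant 3 plus one sine at the fundamental
# frequency, so A₀ = 6 (constant term A₀/2 = 3) and B₁ = 2.
f = 3 + 2 * np.sin(2 * np.pi * x / L)

A0 = (2 / L) * np.sum(f * np.cos(0 * x)) * dx
B1 = (2 / L) * np.sum(f * np.sin(2 * np.pi * x / L)) * dx
print(A0, B1)
```

Note the 2/L prefactor plays the same role as 1/π on the (-π, π) interval: it is 1/‖basis‖², since ‖sin(2πkx/L)‖² = L/2 on (0, L).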

Approximation is periodic

The approximation $\hat f(x)$ is L-periodic, because f(x) is defined from 0 to L and all the sines and cosines are L-periodic:

(Figure: the periodic approximation repeating over the intervals [-L, 0], [0, L], [L, 2L], ….)

The Fourier approximation $\hat f(x)$ is periodic, just repeating forever the pattern of the target function $f(x)$, which is defined on a single interval.


Inner Products in Hilbert Space

Source video: Inner Products in Hilbert Space

Inner products of functions

It’s consistent with the definition of the inner product of vectors.

Given two functions f(x) and g(x),

(Figure: two functions f(x) and g(x) plotted over the interval [a, b].)

The inner product between these two functions is defined as:

$$ ⟨f(x), g(x)⟩ = ∫ₐᵇ f(x) g(x) dx $$

If they’re complex-valued functions (like $e^{iwx}$), then this inner product would use the complex conjugate (same real part, opposite imaginary part) of g(x): $⟨f(x), g(x)⟩ = ∫ₐᵇ f(x) \bar g(x) dx$

This inner product indicates how similar these two functions are, just like the inner product of vectors.

By sampling these two functions at n evenly spaced x locations, each function is represented by a data vector.

The interval between two consecutive samples is Δx = (b-a)/(n-1).

As n increases, more points are sampled and the interval becomes arbitrarily small, so the continuous function can be recovered from the series of points.

$$ \underline 𝐟 = \begin{bmatrix} f₁ \\\ f₂ \\\ ⋮ \\\ fₙ \end{bmatrix} ; \quad \underline 𝐠 = \begin{bmatrix} g₁ \\\ g₂ \\\ ⋮ \\\ gₙ \end{bmatrix} $$

Compute the inner product of these two vectors:

$$ ⟨\underline 𝐟, \underline 𝐠⟩ = \underline 𝐠ᵀ ⋅ \underline 𝐟 \ = ∑ₖ₌₁ⁿ fₖ gₖ $$
  • where 𝐠ᵀ is a row vector, 𝐟 is a column vector.

  • If the data is complex-valued, 𝐠ᵀ becomes the complex conjugate transpose 𝐠* and gₖ becomes the complex conjugate $\bar gₖ$:

    $⟨\underline 𝐟, \underline 𝐠⟩ = \underline 𝐠^* ⋅ \underline 𝐟 = ∑ₖ₌₁ⁿ fₖ \bar gₖ$

If the number of samples (resolution) is doubled, this sum gets roughly twice as large, which is not correct, because the “similarity” of the two functions doesn’t change.

So the inner product should be normalized by Δx (similar in spirit to taking an expectation).

$$ ⟨\underline 𝐟, \underline 𝐠⟩ Δx = ∑ₖ₌₁ⁿ f(xₖ) \bar g(xₖ) Δx $$

The right-hand term is the Riemann approximation of the continuous integral.

Taking the limit as Δx goes to 0, the resolution becomes infinitely fine (corresponding to infinitely tall vectors), and the Riemann approximation becomes the continuous integral:

$$ ⟨f(x), g(x)⟩ = ∫ₐᵇ f(x) \bar g(x) dx $$
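The convergence can be sketched directly: ⟨sin, sin⟩ on [0, π] should approach ∫ sin²(x) dx = π/2 as the sampling gets finer (the interval and functions are my own choices for illustration):

```python
import numpy as np

a, b = 0.0, np.pi     # ⟨sin, sin⟩ on [0, π] = ∫ sin²(x) dx = π/2

for n in (10, 100, 10_000):
    x = np.linspace(a, b, n)            # n evenly spaced samples
    dx = (b - a) / (n - 1)              # sample spacing Δx
    inner = np.sum(np.sin(x) * np.sin(x)) * dx   # Δx-scaled inner product
    print(n, inner)                     # approaches π/2 ≈ 1.5708
```

Without the Δx factor, the sum would grow linearly with n instead of converging, which is exactly the normalization argument above.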

From here on: watch each video once first, then write the notes out from memory.

(2023-06-27)

Complex Fourier Series

Source video: Complex Fourier Series

The Fourier series uses an infinite sum of sine and cosine functions of increasingly high frequencies to approximate an arbitrary periodic function.

If the function is complex-valued, the Fourier series should be reformulated to accommodate imaginary numbers.

By leveraging Euler’s formula $e^{ikx} = cos(kx) + i sin(kx)$, the complex Fourier series can be written as:

$$ f (x) = ∑_{-∞}^{+∞} Cₖ eⁱᵏˣ \\\ \ = ∑_{-∞}^{+∞} (α+βi) (cos(kx) + i sin(kx)) $$

where Cₖ is a complex coefficient.

The real numbers are a subset of the complex numbers, so this formula also applies to real-valued functions, subject to a constraint:

if f(x) is real-valued, $Cₖ = \bar C₋ₖ$.

This can be derived by expanding the sum and cancelling all the imaginary terms:

$$ f(x) = ∑_{-∞}^{+∞} (α+βi) (cos(kx) + i sin(kx)) \\\ \ = ∑_{-∞}^{+∞} αcos(kx) + \cancel{ αi sin(kx) } + \cancel{ βicos(kx)} - βsin(kx) \\\ \ = ∑_{-∞}^{+∞} αcos(kx) - βsin(kx) \\\ \ = ∑_{-∞}^{+∞} (α-βi) (cos(-kx) + i sin(-kx) ) \\\ $$
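The constraint $Cₖ = \bar C₋ₖ$ can be checked numerically with the FFT, which computes the discrete analogue of the Cₖ (a sketch with an arbitrary real-valued test signal):

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = 2.0 + np.cos(3 * x) + 0.5 * np.sin(7 * x)   # real-valued signal

C = np.fft.fft(f) / n        # discrete Fourier coefficients C_k
# For real f, C_k = conj(C_{-k}); index -k wraps to n-k in FFT output.
for k in range(1, n // 2):
    assert np.allclose(C[k], np.conj(C[-k]))
print("conjugate symmetry holds")
```

This conjugate symmetry is why, for real signals, only the non-negative frequencies need to be stored — the negative-frequency half carries no extra information.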

eⁱᵏˣ form an orthogonal basis

Each integer k gives a different basis function eⁱᵏˣ. These basis functions are orthogonal: each is a unique direction, and together they span an infinite-dimensional function space.

Orthogonal means the inner product of any two different basis functions is 0, while the inner product of a basis function with itself is its norm squared.

Let eⁱᵏˣ be defined as Ψₖ (eⁱᵏˣ ≔ Ψₖ), then the inner product in the Hilbert space of any two basis functions can be computed as an integral (the period is [-π,π]):

$$ ⟨Ψⱼ, Ψₖ⟩ = ∫_{-π}^π Ψⱼ ⋅ \bar Ψₖ dx \\\ \ = ∫_{-π}^π eⁱʲˣ ⋅ e⁻ⁱᵏˣ dx \\\ \ = ∫_{-π}^π eⁱ⁽ʲ⁻ᵏ⁾ˣ dx \\\ \ = \frac{1}{i(j-k)} ⋅ [eⁱ⁽ʲ⁻ᵏ⁾ˣ ]_{-π}^π $$

If (j-k) is a non-zero integer d, then because $e^{idπ} = e^{-idπ}$ for integer d, the definite integral, and hence the inner product, is 0.

While if (j-k)=0, the expression evaluates to 0/0:

$$ \frac{ [eⁱ⁽ʲ⁻ᵏ⁾ˣ]_{-π}^π }{ i(j-k) } = \frac{0}{0} $$

According to L’Hôpital’s rule, “the limit of the ratio of two functions is the same after we take the derivative of each function.”

So take the derivative of the numerator and the denominator w.r.t. (j-k):

$$ \begin{aligned} & lim_{(j-k)→0} \frac{ [eⁱ⁽ʲ⁻ᵏ⁾ˣ ]\_{-π}^π }{ i(j-k) } \\\ &= lim_{(j-k)→0} \frac{e^{i(j-k)π} - e^{i(j-k)(-π)} }{i(j-k)} \\\ &= lim_{(j-k)→0} \frac{ iπ e^{i(j-k)π} - (-πi) e^{i(j-k)(-π)}}{i} \\\ &= 2π \end{aligned} $$

Actually, it’s not necessary to take this limit.

Since $⟨Ψⱼ, Ψₖ⟩ = ∫_{-π}^π eⁱ⁽ʲ⁻ᵏ⁾ˣ dx$, the integrand is just 1 when (j-k) = 0, so the integral is 2π.

Therefore, the inner product is

$$ ⟨Ψⱼ, Ψₖ⟩ = \frac{1}{i(j-k)} ⋅ eⁱ⁽ʲ⁻ᵏ⁾ˣ |_{-π}^π \\\ \ = \begin{cases} 0 & \text{if j $\neq$ k} \\\ 2π & \text{if j=k} \end{cases} $$

The norm squared of a basis function Ψⱼ (‖Ψⱼ‖²) is 2π.
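Both cases can be verified numerically with a Riemann sum over [-π, π] (a sketch; the particular j, k values are arbitrary):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 100_000, endpoint=False)
dx = x[1] - x[0]

def inner(j, k):
    # ⟨Ψ_j, Ψ_k⟩ = ∫ e^{ijx} · conj(e^{ikx}) dx = ∫ e^{i(j-k)x} dx
    return np.sum(np.exp(1j * (j - k) * x)) * dx

print(abs(inner(2, 5)))      # ≈ 0   (j ≠ k: orthogonal)
print(inner(3, 3).real)      # ≈ 2π  (j = k: norm squared)
```
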

By analogy with the regular 2D vector space above, a function can be written as a sum of its projections onto each basis function:

$$ 𝐟(x) = ∑_{-∞}^{+∞} Cₖ eⁱᵏˣ = ∑_{k=-∞}^{+∞} \frac{⟨𝐟(x), Ψₖ⟩}{2π} ⋅Ψₖ $$

doubt: Is Ψₖ unit length? If not, does Ψₖ need normalization? Or is the 1/2π for measuring energy rather than for normalizing Ψₖ to unit length?

The intuitive application of the Fourier series: given a function, an approximated version of it can be obtained by truncating the sum of basis directions weighted by coefficients, where each coefficient is the projection (inner product) of the function onto a basis function.


(2023-06-28)

Fourier Transform

Double integral for x then for w

It’s still an inner product (projection): x ranges from -∞ to +∞, and the function is projected onto a basis function of a particular frequency w, giving the Fourier coefficient. That is the Fourier transform.

The counterpart of the Fourier series is the inverse Fourier transform.
