Stanford’s course on the Fourier Transform has, for the first 6 lectures, been almost entirely about Fourier Series.

Fourier Series can be used to represent any periodic phenomenon. Phenomena are called functions in math and signals in engineering.

Periodic phenomena are broadly classified as either periodic in space (e.g., a ring) or periodic in time (e.g., a wave). Periodicity arises from symmetry inherent in some property of the periodic phenomenon.

Prof Osgood is a genius. His enthusiasm is surpassed only by his insight into mathematics.

The Fourier series is based on linear combinations of sine and cosine, which are in turn based on the unit circle. I mention the unit circle because it isn't the only curve that generates trigonometric-style functions. For example, the hyperbolic sine is based on a hyperbola.

Any periodic function can be expressed as a linear combination (a sum) of these trigonometric basis functions. The hard part is figuring out the coefficient to apply to each term of the sum.

For mathematical convenience, the exponential form of sine and cosine is often used when deriving the coefficients of the Fourier series for a particular function. There’s a lot of calculus involved here but, at least up to lecture 7, it’s all pretty basic plug-and-chug integration and differentiation. Despite that, it makes me yearn for a refresher in things like Integration by Parts. Thank God for Wikipedia and Schaum’s Outlines!
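The exponential form collapses the two real integrals into one complex one. A sketch, reusing the square wave from before (my example, not the course's): the coefficient at harmonic n is c_n = ∫₀¹ f(t) e^(−2πint) dt, and for n > 0 it relates to the real coefficients by c_n = (a_n − i·b_n)/2.

```python
import numpy as np

# Same illustrative 1-periodic square wave as before.
t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
f = np.where(t < 0.5, 1.0, -1.0)

def c(n):
    """Complex Fourier coefficient c_n = integral of f(t) * exp(-2*pi*i*n*t)."""
    return np.sum(f * np.exp(-2j * np.pi * n * t)) * dt

# For this square wave a_1 = 0 and b_1 = 4/pi, so c_1 = (0 - i*4/pi)/2 = -2i/pi.
print(c(1))   # roughly -0.6366j
```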

Apparently the key breakthrough in the theoretical justification for Fourier Series occurred when mathematicians gave up on trying to prove that the Fourier series converges **exactly** to the function being represented. Instead, they were able to prove that the mean squared error (the average of the squared difference between the value of the function and its Fourier series) approaches zero in the limit.
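This convergence-in-mean-square is easy to see numerically (again with my square-wave example): as more terms are added to the partial sum, the mean squared error keeps shrinking, even though pointwise behavior at the jump (the Gibbs overshoot) never goes away.

```python
import numpy as np

# Illustrative 1-periodic square wave and its known Fourier sine series.
t = np.linspace(0.0, 1.0, 20001)
f = np.where(t < 0.5, 1.0, -1.0)

def partial_sum(N):
    """Partial Fourier sum using the square wave's known coefficients 4/(pi*n)."""
    S = np.zeros_like(t)
    for n in range(1, N + 1, 2):   # only odd harmonics are nonzero
        S += (4 / (np.pi * n)) * np.sin(2 * np.pi * n * t)
    return S

Ns = (1, 5, 25, 125)
errors = [np.mean((f - partial_sum(N)) ** 2) for N in Ns]
for N, mse in zip(Ns, errors):
    print(N, mse)   # mean squared error decreases toward zero
```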

Sine and Cosine are both continuous and smooth (infinitely differentiable). Because of this, edges (either jump discontinuities or sharp changes in the derivative) require higher and higher frequency components to express. I visualize this as higher and higher frequency sinusoids “bunching up” together to produce a sharp change in the value of the function at any given point. The more such points in a function (i.e., the more edges), the more these high frequency sinusoids are needed to represent the function.

I believe it takes an infinite number of such high frequency components to perfectly reproduce any discontinuous function, since the sum of a finite number of continuous functions is itself a continuous function.
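One way to see how much “work” the high frequencies do (my own comparison, not from the lectures): coefficients of a function with a jump (square wave) decay like 1/n, while those of a merely kinked but continuous function (triangle wave) decay like 1/n², so the smoother function needs far less high-frequency content.

```python
import numpy as np

# Two illustrative 1-periodic test functions.
t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
square = np.where(t < 0.5, 1.0, -1.0)                       # jump discontinuity
triangle = (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * t))   # continuous, kinked

def b(f, n):
    """Sine coefficient b_n via a Riemann sum over one period."""
    return 2.0 * np.sum(f * np.sin(2 * np.pi * n * t)) * dt

for n in (1, 3, 9, 27):
    print(n, abs(b(square, n)), abs(b(triangle, n)))
# square coefficients fall off like 1/n, triangle like 1/n^2
```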

In lecture 6 Prof Osgood formally bridges Fourier Series to Fourier Transforms. The Fourier Transform is a limiting case of the Fourier Series: the phenomenon need not be periodic, which, mathematically, means letting the period tend to infinity.
