Engineering Mathematics II MAP 436-4768 Spring 22

Fourier Integral Representations

Basic formulas and facts

1. If $f(t)$ is a function without too many horrible discontinuities; technically, if $f(t)$ is decent enough so that $\int_a^b f(t)\,dt$ is defined (makes sense as a Riemann integral, for example) for all finite intervals $-\infty < a < b < \infty$, and if

(1)   $\int_{-\infty}^{\infty} |f(t)|\,dt < \infty,$

then the function $C(\omega)$ is defined for all real numbers $\omega$ by

$C(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt.$

We say $C(\omega)$ is the Fourier transform of $f$, and we also denote it by $\mathcal{F}(f)(\omega)$ or $\hat f(\omega)$. In this case, $C(\omega)$ is continuous and satisfies

$\lim_{\omega\to-\infty} C(\omega) = 0 = \lim_{\omega\to\infty} C(\omega).$

These last facts are moderately hard to verify.

2. Condition (1) is usually too restrictive, though great if one has it. It is a fact, much harder to verify, that if we have

(2)   $\int_{-\infty}^{\infty} |f(t)|^2\,dt < \infty,$

then one can define

$C(\omega) = \frac{1}{2\pi} \lim_{R\to\infty} \int_{-R}^{R} f(t)\,e^{-i\omega t}\,dt.$

This time $C(\omega)$ need not be continuous, but one has otherwise a very nice situation. There is a symmetry between $f$ and $C$. It turns out that one also has

(3)   $\int_{-\infty}^{\infty} |C(\omega)|^2\,d\omega < \infty,$

and one can recover $f$, mostly, by

$f(t) = \lim_{R\to\infty} \int_{-R}^{R} C(\omega)\,e^{i\omega t}\,d\omega.$

Precisely: if $f$ satisfies the Dirichlet conditions in every finite interval, then

$f(t) = \lim_{R\to\infty} \int_{-R}^{R} C(\omega)\,e^{i\omega t}\,d\omega$
is true at every point of continuity of $f$. At jumps one has instead

$\frac{f(t+) + f(t-)}{2} = \lim_{R\to\infty} \int_{-R}^{R} C(\omega)\,e^{i\omega t}\,d\omega.$

There is more in this case; one also has

(4)   $\int_{-\infty}^{\infty} |C(\omega)|^2\,d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} |f(t)|^2\,dt.$

3. The not quite sine and cosine transforms of a function $f(t)$ are defined by

$A(\omega) = \frac{1}{\pi} \int_{-\infty}^{\infty} f(t)\cos\omega t\,dt, \qquad B(\omega) = \frac{1}{\pi} \int_{-\infty}^{\infty} f(t)\sin\omega t\,dt.$

The conditions for their existence are the same as for $C(\omega)$. Notice that $A(\omega)$ is even and $B(\omega)$ is odd.

4. The relations between $A$, $B$, $C$ are similar to those among the Fourier coefficients:

$C(\omega) = \frac{1}{2}\bigl(A(\omega) - i B(\omega)\bigr).$

It is also easy to see that

$f(t) = \int_0^{\infty} \bigl(A(\omega)\cos\omega t + B(\omega)\sin\omega t\bigr)\,d\omega,$

the equality interpreted as usual. Looking at the definitions of $A$, $B$ we see that $A(\omega) = 0$ if $f$ is odd, while $B(\omega) = 0$ if $f$ is even, so that for even functions we have

$A(\omega) = \frac{2}{\pi} \int_0^{\infty} f(t)\cos\omega t\,dt, \qquad f(t) = \int_0^{\infty} A(\omega)\cos\omega t\,d\omega;$

for odd functions,

$B(\omega) = \frac{2}{\pi} \int_0^{\infty} f(t)\sin\omega t\,dt, \qquad f(t) = \int_0^{\infty} B(\omega)\sin\omega t\,d\omega.$

5. As we move into Section 9.3, things get more formal. Here we find the Fourier transform explicitly defined. It is basically $C$, but the notation is different and the $1/(2\pi)$ has been shifted around. If $f(t)$ is defined for
$-\infty < t < \infty$ and satisfies any of the conditions mentioned above for the existence of $C(\omega)$, then the Fourier transform of $f$ is defined by

$\mathcal{F}(f)(\omega) = F(\omega) = \hat f(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt.$

This is just $2\pi\,C(\omega)$. However, writing $F$, or better $\mathcal{F}(f)$ or $\hat f$, for the Fourier transform of $f$ allows us to work with several functions and their Fourier transforms at once, knowing all the time who is the Fourier transform of whom. Of course, if the original function is $g(t)$, then its Fourier transform is denoted by any one of $G(\omega)$, $\mathcal{F}(g)$ or $\hat g$. Some people keep the variable longer and write things like $\mathcal{F}(f(t))$, where I usually write $\mathcal{F}(f)$. You may choose any notation you prefer; the main thing to realize is that once you Fourier transform, the new variable is $\omega$ and $t$ has ceased to exist. The one place where I'll use the $t$ the way the text does is for concrete functions. For example, if $f(t) = e^{-t^2}$, I might refer to its Fourier transform by $\mathcal{F}(e^{-t^2})$. I consider this a convenient abuse of language.

The inverse Fourier transform of $H$ is defined by

$\mathcal{F}^{-1}(H)(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} H(\omega)\,e^{i\omega t}\,d\omega.$

The Fourier representation theorem can then be expressed by: $f(t) = \mathcal{F}^{-1}(\mathcal{F}(f))(t)$.

6. The cosine transform is essentially what was called $A(\omega)$ in Section 9.1. Let us recall that

$A(\omega) = \frac{1}{\pi} \int_{-\infty}^{\infty} f(t)\cos\omega t\,dt.$

As we saw today in class, every function $f(t)$ defined for $-\infty < t < \infty$ can be expressed as the sum of an even and an odd function: $f(t) = f_1(t) + f_2(t)$, where $f_1(t) = \frac12\bigl(f(t) + f(-t)\bigr)$ is even and $f_2(t) = \frac12\bigl(f(t) - f(-t)\bigr)$ is odd. Putting this into the expression for $A(\omega)$ we get

$A(\omega) = \frac{1}{\pi} \int_{-\infty}^{\infty} f_1(t)\cos\omega t\,dt + \frac{1}{\pi} \int_{-\infty}^{\infty} f_2(t)\cos\omega t\,dt = \frac{2}{\pi} \int_0^{\infty} f_1(t)\cos\omega t\,dt,$

since

$\int_{-\infty}^{\infty} f_2(t)\cos\omega t\,dt = 0$
because $f_2$ is odd, and

$\int_{-\infty}^{\infty} f_1(t)\cos\omega t\,dt = 2\int_0^{\infty} f_1(t)\cos\omega t\,dt$

because $f_1$ is even. The cosine transform wipes out the odd part of a function, so one might as well limit it to even functions. Better yet, we apply it to functions defined on the interval $[0, \infty)$, which we may wish to imagine as being part of an even function. Or not. To be precise: if $f(t)$ is defined for $0 \le t < \infty$, we define its cosine transform by

$F_c(\omega) = \mathcal{F}_c(f)(\omega) = \int_0^{\infty} f(t)\cos\omega t\,dt.$

The original function is then (up to the factor $2/\pi$) the cosine transform of the cosine transform; that is,

$f(t) = \frac{2}{\pi} \int_0^{\infty} F_c(\omega)\cos\omega t\,d\omega.$

The relation between $F$ and $F_c$ is: if $f(t)$ is even, then $F(\omega) = 2F_c(\omega)$.

7. Similar considerations lead to the sine transform. If $f(t)$ is defined for $0 \le t < \infty$, we define its sine transform by

$F_s(\omega) = \mathcal{F}_s(f)(\omega) = \int_0^{\infty} f(t)\sin\omega t\,dt.$

The original function is then (up to the factor $2/\pi$) the sine transform of the sine transform; that is,

$f(t) = \frac{2}{\pi} \int_0^{\infty} F_s(\omega)\sin\omega t\,d\omega.$

The relation between $F$ and $F_s$ is: if $f(t)$ is odd, then $F(\omega) = -2i\,F_s(\omega)$.

8. In general, if $f(t)$ is defined for $-\infty < t < \infty$, the transforms are related as follows. Write $f = f_1 + f_2$, where $f_1$ is even and $f_2$ is odd. Then

$F(\omega) = 2\bigl(F_c(f_1)(\omega) - i\,F_s(f_2)(\omega)\bigr).$

9. If $f(t)$, $g(t)$ are defined for $-\infty < t < \infty$, then their convolution is the function $f*g(t)$ defined by

$f*g(t) = \int_{-\infty}^{\infty} f(t-s)\,g(s)\,ds.$

The convolution does not make sense unless $f(t)$, $g(t)$ behave in a relatively nice way at infinity. For example, it does make sense if any of the following conditions hold:
(a) $\int_{-\infty}^{\infty} |f(t)|\,dt < \infty$, $\int_{-\infty}^{\infty} |g(t)|\,dt < \infty$. In this case one also has that $\int_{-\infty}^{\infty} |f*g(t)|\,dt < \infty$.

(b) $\int_{-\infty}^{\infty} |f(t)|^2\,dt < \infty$, $\int_{-\infty}^{\infty} |g(t)|^2\,dt < \infty$. In this case the convolution $f*g(t)$ is continuous and bounded.

(c) Both $f$, $g$ satisfy $f(t) = 0$ if $t < 0$ and $g(t) = 0$ if $t < 0$. What happens now is that when we consider the integrand $f(t-s)g(s)$ of the convolution, it is zero for quite a few values of $s$. In the first place, if $t < 0$ then one of $f(t-s)$, $g(s)$ is always zero, and so is their product. In fact, if $s < 0$, then $g(s) = 0$; if $s \ge 0$, then $t - s \le t < 0$, so $f(t-s) = 0$. The conclusion is that $f*g(t) = 0$. On the other hand, if $t > 0$, then $f(t-s)g(s) = 0$ if $s < 0$ or if $t - s < 0$; i.e., if $s > t$. Putting all this together gives us the following useful formula for the convolution of two functions $f(t)$, $g(t)$ that are $0$ for $t < 0$:

$f*g(t) = \begin{cases} 0 & \text{if } t < 0, \\[2pt] \int_0^t f(t-s)\,g(s)\,ds & \text{if } t > 0. \end{cases}$

The convolution has the following properties:

(a) $f*g = g*f$.

(b) $f*(g+h) = f*g + f*h$.

(c) $\mathcal{F}(f*g)(\omega) = F(\omega)\,G(\omega)$.

(d) $\mathcal{F}(fg)(\omega) = \frac{1}{2\pi}\,F*G(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega - \xi)\,G(\xi)\,d\xi.$

10. An important Fourier transform. Let $f(t) = e^{-at^2}$, where $a > 0$ is a constant. We want to find $F(\omega)$. Unfortunately, going into all the details is more than we can do here, so we'll try to accept some facts on faith. But you can look them up! Fact 1 is that

$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.$

We begin our computation. We have

$F(\omega) = \int_{-\infty}^{\infty} e^{-at^2 - i\omega t}\,dt.$
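Fact 1 can be checked numerically before we rely on it; here is a minimal sketch (the cutoff at $|x| = 10$ and the grid size are my choices, not part of the notes):

```python
import numpy as np

# Approximate the Gaussian integral of Fact 1 on a large finite window;
# e^{-x^2} is utterly negligible beyond |x| = 10, so the truncation is harmless.
x = np.linspace(-10.0, 10.0, 200_001)
y = np.exp(-x**2)

# Composite trapezoid rule, written out by hand.
dx = x[1] - x[0]
gauss_integral = float(np.sum((y[1:] + y[:-1]) / 2) * dx)

print(gauss_integral)   # should agree with sqrt(pi) = 1.7724538509...
```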
The exponent in the integrand above can be rearranged as follows:

$-at^2 - i\omega t = -\Bigl(\sqrt{a}\,t + \frac{i\omega}{2\sqrt{a}}\Bigr)^2 - \frac{\omega^2}{4a}.$

We get from this that

$F(\omega) = e^{-\omega^2/(4a)} \int_{-\infty}^{\infty} e^{-\bigl(\sqrt{a}\,t + \frac{i\omega}{2\sqrt{a}}\bigr)^2}\,dt.$

Suppose we make the substitution $x = \sqrt{a}\,t + \frac{i\omega}{2\sqrt{a}}$. If this were legal, we'd have $dx = \sqrt{a}\,dt$. The limits become sort of strange. It is a fact from complex analysis that the substitution is legal and the limits of integration can remain $-\infty$, $\infty$. That is,

$F(\omega) = e^{-\omega^2/(4a)} \int_{-\infty}^{\infty} e^{-x^2}\,\frac{dx}{\sqrt{a}}.$

In view of our first fact, we get:

$F(\omega) = \mathcal{F}(e^{-at^2}) = \sqrt{\frac{\pi}{a}}\; e^{-\omega^2/(4a)}.$
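The final formula can be sanity-checked numerically. A sketch, where the value $a = 3$, the sample frequencies, and the integration grid are my choices for illustration (the integrand decays so fast that truncating at $|t| = 10$ costs nothing):

```python
import numpy as np

a = 3.0  # any a > 0 works; chosen for illustration

# Trapezoid approximation of F(omega) = integral of e^{-a t^2} e^{-i omega t} dt.
t = np.linspace(-10.0, 10.0, 100_001)
dt = t[1] - t[0]

def fourier_transform(omega):
    integrand = np.exp(-a * t**2 - 1j * omega * t)
    return np.sum((integrand[1:] + integrand[:-1]) / 2) * dt

omegas = np.array([0.0, 1.0, 2.5])
numeric = np.array([fourier_transform(w) for w in omegas])
exact = np.sqrt(np.pi / a) * np.exp(-omegas**2 / (4 * a))

# The computed transform should match sqrt(pi/a) e^{-omega^2/(4a)} and have
# essentially zero imaginary part, since f is even (so B, hence Im F, vanishes).
print(np.max(np.abs(numeric - exact)))
```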