Blaschke products in signal processing


An all-pass filter has constant gain at all frequencies. Each pole of the filter is paired with a zero at its conjugate-reciprocal location, and a filter may contain several such conjugate-reciprocal pole-zero pairs. The general form of the transfer function of such a filter is
H(z) = A \prod\limits_{k=1}^{n}\left(\frac{z - c_k}{1 - c^*_kz} \right)
where A is a constant. The expression above is an instance of a more general class known as Blaschke (pron. Bloss-kee) products, named after the Austrian mathematician Wilhelm Blaschke. The order-n all-pass filter above is a finite Blaschke product. Finite Blaschke products had been around for a while (though not under that name) even before Blaschke proposed their infinite counterparts in 1915.
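As a quick sanity check (not part of the original post), here is a minimal Python sketch with a few hypothetical coefficients c_k that evaluates the finite Blaschke product numerically and confirms the constant gain on the unit circle:

import numpy as np

# For any coefficients c_k with |c_k| < 1, each factor (z - c_k)/(1 - conj(c_k) z)
# has magnitude 1 on the unit circle, so |H(e^{jw})| equals the constant |A|.
c = np.array([0.5 + 0.3j, -0.2 + 0.7j, 0.1 - 0.4j])  # hypothetical zeros, |c_k| < 1
A = 1.0                                               # overall constant

w = np.linspace(0, 2 * np.pi, 1000)
z = np.exp(1j * w)                                    # points on the unit circle

H = A * np.prod((z[:, None] - c) / (1 - np.conj(c) * z[:, None]), axis=1)

print(np.allclose(np.abs(H), np.abs(A)))              # True: constant gain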


Closely associated with all-pass filters are Blaschke matrices, which satisfy \mathbf{B}(z) \mathbf{B}^*(z^{-1}) = \mathbf{I}, where \mathbf{B}(z) has no poles within the unit circle and \mathbf{B}^*(z^{-1}) denotes the conjugate transpose. This is analogous to the factorization of an all-pass filter [1].
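For a concrete (hypothetical) instance not taken from [1], a simple 2x2 paraunitary matrix can be built from two rotations and a delay; the Python sketch below checks the defining relation numerically, using the fact that for real coefficients \mathbf{B}^*(z^{-1}) reduces to the conjugate transpose of \mathbf{B}(z) on the unit circle:

import numpy as np

def rot(theta):
    # 2x2 real rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical degree-1 example: B(z) = R(theta1) @ diag(1, 1/z) @ R(theta0).
# On |z| = 1, B*(z^{-1}) equals B(z)^H, so B(z) @ B(z)^H should be the identity.
theta0, theta1 = 0.3, 1.1            # arbitrary rotation angles (assumed)
R0, R1 = rot(theta0), rot(theta1)

ok = True
for w in np.linspace(0, 2 * np.pi, 64):
    z = np.exp(1j * w)
    B = R1 @ np.diag([1, 1 / z]) @ R0
    ok = ok and np.allclose(B @ B.conj().T, np.eye(2))

print(ok)                            # True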


Blaschke products are very useful in digital filter design. A recent book [2] has more information on other applications of Blaschke products.

References:
[1] Mandic D. P. and Goh V. S. L., “Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models,” Wiley, 2009, pp. 146-147.
[2] Mashreghi J. and Fricain E., “Blaschke Products and Their Applications,” Springer, 2013.


Bohr (not Niels) revisited


Danish physicist Niels Bohr is hardly an unknown figure to a typical passionate high school physics student. However, until graduate school I did not learn much about the work of his brother Harald Bohr, who was a mathematician. Both did pioneering work in the sciences, but Niels won the Nobel Prize. Both were passionate footballers, but Harald played in the Olympics. Their father Christian – a professor of physiology – remarked that Harald was brilliant but Niels was special.

I learned about Harald’s work in my functional analysis class. By Zorn’s lemma, every Hilbert space admits an orthonormal basis (ONB); however, only separable Hilbert spaces have a countable ONB. An ONB for a non-separable Hilbert space can be constructed by transfinite induction, invoking the Axiom of Choice/Zorn’s lemma. One example of an ONB of a non-separable Hilbert space is the (Harald) Bohr basis for the space of almost periodic functions.
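To make the example concrete (a standard fact, not spelled out in the original post): the space of almost periodic functions carries the time-averaged inner product

\langle f, g \rangle = \lim_{T \rightarrow \infty} \frac{1}{2T} \int_{-T}^{T} f(t)\overline{g(t)}\, dt

under which the complex exponentials satisfy \langle e^{i\lambda t}, e^{i\mu t} \rangle = 1 if \lambda = \mu and 0 otherwise, for all real \lambda, \mu. Since this orthonormal family is indexed by the uncountable set \mathbb{R}, the space is non-separable.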

A few good problems from an old book


In Fall 2000, I was introduced to signals and systems through a less popular textbook: An Introduction to the Principles of Communication Theory by J. C. Hancock (1961). My fellow undergraduate students used to tremble at its very sight. The book was laconic in its explanations and parsimonious with examples. Nevertheless, it was (and continues to be) used as the textbook for undergraduate signals and systems courses at several universities. I personally believe it should be replaced by more recent classics in this area, since an uninitiated student is more likely to cloud than to clear his understanding on a first reading. Indeed, the review of the book that appeared in 1962 in the IRE Transactions on Information Theory was an unfavorable one.

That said, the book is very useful as a compact handbook on communication theory. It packs signals and systems, communication theory, analog electronics, random variables, probability, detection theory and more into 253 pages – a feat of ingenious technical brevity. It also contains some of the most interesting exercises at the end of each chapter, and I have revisited them time and again to check my evolving comprehension of the subject. On one of my more recent passes through the text, I came across two interesting problems, both from Chapter III: Random Signal Theory. The first problem [1] deals with comparing two random variables: it gives the probability density functions of two statistically independent random variables X and Y and asks for the probability that a sample value of x(t) exceeds a sample value of y(t). We are given (notation borrowed from Hancock’s book),

p(x) = 2ae^{-bx}, 0 \leq x \leq \infty, and

p(y) = ae^{-b|y|}, -\infty \leq y \leq \infty

Since X and Y are statistically independent, we have,

p(x, y) = p(x)p(y), 0 \leq x < \infty, -\infty < y < \infty.

Since p(x) = 0 for x < 0, the event X \leq Y can only occur where y \geq 0, so the region of integration below may be restricted to 0 \leq y < \infty.

Now, P(X>Y) = P(X-Y>0) = 1 - P(X-Y \leq 0)

\Rightarrow P(X>Y) = 1 - {\int\int_{x - y \leq 0} p(x,y)dxdy}

= 1- {\int_0^{\infty}\int_{0}^{y} p(x,y)dxdy}

= 1- {\int_0^{\infty}\int_{0}^{y} p(x)p(y)dxdy}

= 1- {\int_0^{\infty}(\int_{0}^{y} 2ae^{-bx}dx) p(y)dy}

= 1- {\int_0^{\infty}( \frac{2a}{-b}e^{-bx}|_{0}^{y}) p(y)dy}

= 1 - \frac{2a}{-b}{\int_0^{\infty}  (e^{-by} - 1) \cdot ae^{-by} dy}

= 1 + \frac{2a^2}{b}{\int_0^{\infty}  (e^{-2by} - e^{-by}) dy}

= 1 + \frac{2a^2}{b} {\frac{1}{-2b} (e^{-2by} - 2e^{-by})|_{0}^{\infty}}

= 1 - \frac{a^2}{b^2} (0 - 1 - (0 - 2))

\Rightarrow P(X>Y) = 1 - \frac{a^2}{b^2}
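For p(x) and p(y) to be valid densities, the constants must satisfy 2a = b, so the result evaluates to 1 - \frac{1}{4} = \frac{3}{4}. A quick Monte Carlo sketch (not part of Hancock's problem) that checks this, taking a = 1 and b = 2 as a hypothetical choice:

import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 2.0                      # normalization of the densities requires b = 2a
n = 1_000_000

# X with density 2a*exp(-b*x) on [0, inf) is exponential with rate b when b = 2a
x = rng.exponential(scale=1 / b, size=n)
# Y with density a*exp(-b*|y|) is Laplace with scale 1/b when a = b/2
y = rng.laplace(loc=0.0, scale=1 / b, size=n)

print(np.mean(x > y))                # ~0.75, matching 1 - a**2/b**2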

The second problem [2] deals with finding the spectral density of a function from its time-domain representation. Although the equation of the time-domain function is not given, it can be deduced from the diagram that the function is a rectified sine wave. If the period of the sine wave is T, then that of the rectified sine wave is \frac{T}{2}. So,

f(t) = |\sin(\frac{2\pi t}{T})|, which indeed repeats with period \frac{T}{2}.

For a deterministic periodic function f(t) (here of period \frac{T}{2}), the spectral density G(f) is given by,

G(f) = \lim_{T_0 \rightarrow \infty}\frac{|F_{T_0}(f)|^2}{T_0}

where F_{T_0}(f) is the Fourier transform of f(t) truncated to an observation interval of length T_0 (written with a separate symbol so the observation interval is not confused with the period T). Here,

… and I am still working on posting the entire solution.
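In the meantime, here is a hedged sketch (not the finished solution) of where the standard Fourier-series route leads, assuming the waveform is the full-wave rectified sine above. Using the well-known expansion

|\sin(\frac{2\pi t}{T})| = \frac{2}{\pi} - \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{\cos(4\pi k t/T)}{4k^2 - 1}

and the fact that the spectral density of a deterministic periodic signal is a line spectrum weighted by the squared magnitudes of its Fourier coefficients, one would expect

G(f) = \left(\frac{2}{\pi}\right)^2\delta(f) + \sum_{k=1}^{\infty}\left(\frac{2}{\pi(4k^2-1)}\right)^2\left[\delta\left(f - \frac{2k}{T}\right) + \delta\left(f + \frac{2k}{T}\right)\right]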

References:
[1] Hancock J. C., “An Introduction to the Principles of Communication Theory,” McGraw-Hill Book Company, 1961, Problem 3-16.
[2] Hancock J. C., “An Introduction to the Principles of Communication Theory,” McGraw-Hill Book Company, 1961, Problem 3-27.

An estimation theory problem involving Gamma probability density function


Most of the available texts on estimation theory harp on the Gaussian probability density function, which, though an excellent mathematical device, often leads the reader not to explore the properties of other pdfs. In Spring 2008, during my ECE 652 (Estimation and Filtering Theory) class, I came across this beautiful problem [1], which uses the Gamma pdf in a classic estimation theory question. My solution for the problem goes like this (LaTeX rendering for the following courtesy of CodeCogs):

(The worked solution was posted as rendered equations that have not survived here; it relies on the standard integral of exponential functions [2] and on properties of the Gamma distribution [3].)
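Since the solution itself is not reproduced, the two standard facts that [2] and [3] point to are, in one common parameterization (shape \alpha, rate \lambda),

p(x) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\lambda x}, \quad x \geq 0, \; \alpha > 0, \; \lambda > 0

\int_0^{\infty} x^{n} e^{-\lambda x} dx = \frac{n!}{\lambda^{n+1}}, \quad n = 0, 1, 2, \ldots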


References:
[1] Mendel J. M., “Lessons in Estimation Theory for Signal Processing, Communications and Control,” Prentice Hall, 1995, Problem 13-7.
[2] Integral of exponential functions.
[3] Gamma distribution.