This post will provide a brief introduction to the geometrical representation of signals. After some background material on linear algebra, the concepts of analysis and synthesis will be explained and convolution will be defined.
NOTE: a print-friendly (and more complete) version of this post can be found here.
Vector spaces
Discrete signals of length $N$ can be thought of as multidimensional vectors in $\mathbb{R}^N$ or $\mathbb{C}^N$. As such, they can be added to other vectors and multiplied by scalars as follows:

$$ z = \alpha x + \beta y \qquad (1) $$

where $x, y, z$ are vectors of length $N$, $\alpha$ and $\beta$ are scalars, and, element by element, $z_n = \alpha x_n + \beta y_n$.
Equation 1 is called a linear combination, and a set of vectors that is closed under such combinations forms a vector space.
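As a concrete illustration, here is a minimal numpy sketch of the linear combination in Equation 1 (the length $N$, the signals, and the scalars are arbitrary choices):

```python
import numpy as np

N = 8                    # arbitrary signal length
x = np.random.randn(N)   # two example signals of length N
y = np.random.randn(N)
alpha, beta = 2.0, -0.5  # arbitrary scalars

z = alpha * x + beta * y  # linear combination (Equation 1)
```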
Several metrics can be computed on vector spaces:
- mean: $\mu_x = \frac{1}{N} \sum_{n=0}^{N-1} x_n$;
- energy: $E_x = \sum_{n=0}^{N-1} |x_n|^2$;
- power: $P_x = \frac{E_x}{N}$;
- $2$-norm: $\|x\|_2 = \sqrt{\sum_{n=0}^{N-1} |x_n|^2} = \sqrt{E_x}$.
The latter metric, the norm, is often indicated as $\|x\|$ and represents the length of the vector in the space. A vector space with a defined norm (and complete with respect to it) is called a Banach space. The $2$-norm is said to be contractive, that is, it satisfies the triangle inequality:

$$ \|x + y\| \le \|x\| + \|y\| \qquad (2) $$

and it can be generalized to any order $p \ge 1$:

$$ \|x\|_p = \left( \sum_{n=0}^{N-1} |x_n|^p \right)^{1/p} $$
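These metrics are straightforward to compute; the following numpy sketch (the signal is an arbitrary example) also checks that the squared $2$-norm equals the energy:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5])
N = len(x)

mean = x.mean()                    # (1/N) * sum of the samples
energy = np.sum(np.abs(x) ** 2)    # sum of the squared magnitudes
power = energy / N                 # energy per sample
norm2 = np.linalg.norm(x)          # 2-norm: the square root of the energy
norm_p = np.linalg.norm(x, ord=3)  # p-norm of any order, here p = 3

assert np.isclose(norm2 ** 2, energy)
```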
Inner product spaces
A very important operation, called the inner product, can be defined on a Banach space as follows:

$$ \langle x, y \rangle = \sum_{n=0}^{N-1} x_n \, \overline{y_n} \qquad (3) $$

where $\overline{y_n}$ denotes the complex conjugate of $y_n$ (for real signals the conjugation has no effect). The inner product induces the norm defined above:

$$ \|x\| = \sqrt{\langle x, x \rangle} \qquad (4) $$
The inner product has several important properties, among which the Cauchy-Schwarz inequality:

$$ |\langle x, y \rangle| \le \|x\| \, \|y\| $$
Two vectors are said to be orthogonal (indicated as $x \perp y$) if their inner product is zero:

$$ \langle x, y \rangle = 0 \qquad (5) $$
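A short numpy check of these properties (the vectors are arbitrary examples; the inner product is written out as in Equation 3, with the conjugate on the second argument):

```python
import numpy as np

x = np.array([1 + 1j, 2 - 1j, 0.5j])
y = np.array([2 - 1j, 1 + 0j, -1j])

inner = np.sum(x * np.conj(y))  # inner product as in Equation 3

# Cauchy-Schwarz inequality: |<x, y>| <= ||x|| * ||y||
assert np.abs(inner) <= np.linalg.norm(x) * np.linalg.norm(y)

# two orthogonal vectors: their inner product is zero (Equation 5)
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
assert np.isclose(np.dot(u, v), 0.0)
```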
One of the most important applications of the inner product is to project one vector over another. The projection of $x$ on $y$ is defined as:

$$ p_y(x) = \frac{\langle x, y \rangle}{\|y\|^2} \, y \qquad (6) $$

where the ratio between the inner product and the squared norm of $y$ is the scalar coefficient of $x$ along the direction of $y$.
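In code, Equation 6 is a one-liner; this minimal sketch (with arbitrary real vectors) also verifies that the residual $x - p_y(x)$ is orthogonal to $y$:

```python
import numpy as np

x = np.array([3.0, 1.0])
y = np.array([2.0, 0.0])

# projection of x on y (Equation 6)
p = (np.dot(x, y) / np.dot(y, y)) * y

# the part of x not captured by the projection is orthogonal to y
assert np.isclose(np.dot(x - p, y), 0.0)
```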
Under specific conditions, a vector can be reconstructed from some of its projections by summing them. It is important to remark that this reconstruction only works if the vectors on which the projections are computed are pairwise orthogonal; if the vectors used for the projections are linearly independent but not orthogonal, they can be orthogonalized by a method called Gram-Schmidt orthogonalization.
The subspace covered by all linear combinations of a set of vectors is called its span. If the vectors in the set are linearly independent, then the set is called a basis of the space it spans. It is easy to show that a basis of $\mathbb{R}^N$ (or $\mathbb{C}^N$) contains exactly $N$ vectors. Clearly, a vector can be reconstructed with a linear combination of its projections on another set of vectors if and only if the set used is a basis.
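The sketch below implements a plain (numerically naive) Gram-Schmidt orthogonalization and checks that an arbitrary vector is recovered exactly by summing its projections on the resulting orthogonal basis:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for b in basis:
            w = w - (np.dot(v, b) / np.dot(b, b)) * b  # remove the component along b
        basis.append(w)
    return basis

# three linearly independent (but not orthogonal) vectors in R^3
vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]
basis = gram_schmidt(vectors)

# reconstruct an arbitrary vector by summing its projections (Equation 6)
x = np.array([2.0, -1.0, 0.5])
x_rec = sum((np.dot(x, b) / np.dot(b, b)) * b for b in basis)
assert np.allclose(x, x_rec)
```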
Analysis and synthesis
Previous sections showed that a vector in a vector space can be written as a linear combination of a basis for that space, by multiplying the basis vectors by some constants and summing the products. It is therefore possible to define analysis as the estimation of those constants and synthesis as the linear combination that recovers the signal.
The analysis is the representation of a signal given by its inner products with a basis of the vector space; it is therefore given by the projections:

$$ a_k = \langle x, w_k \rangle, \qquad k = 0, 1, \ldots, N-1 \qquad (7) $$

where $w_k$ is the $k$-th vector of an orthonormal basis and $a_k$ is the $k$-th coefficient of the representation.
The synthesis is the reconstruction of the original signal by summing the basis vectors weighted by the representation created by the analysis:

$$ x = \sum_{k=0}^{N-1} a_k \, w_k \qquad (8) $$
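Here is a minimal sketch of analysis and synthesis with an orthonormal basis; the basis itself is an arbitrary choice, obtained here from the QR decomposition of a random matrix:

```python
import numpy as np

N = 4
# columns of W form an arbitrary orthonormal basis w_0, ..., w_{N-1}
W = np.linalg.qr(np.random.randn(N, N))[0]

x = np.random.randn(N)

a = W.T @ x    # analysis: coefficients a_k = <x, w_k>  (Equation 7)
x_rec = W @ a  # synthesis: x = sum of a_k * w_k        (Equation 8)

assert np.allclose(x, x_rec)
```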
The Fourier representation is interpretable in this context as a specific case of analysis and synthesis, where the basis is given by a set of complex sinusoids $w_k[n] = e^{j 2\pi k n / N}$ (where $j$ is the imaginary unit).
The discrete Fourier transform (DFT) will therefore be:

$$ X_k = \sum_{n=0}^{N-1} x_n \, e^{-j 2\pi k n / N} \qquad (9) $$
and, in the same way, the reconstruction (or inverse Fourier transform, IDFT) is given by:

$$ x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \, e^{j 2\pi k n / N} \qquad (10) $$
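Equations 9 and 10 can be written directly as a matrix product and checked against numpy's FFT, which follows the same sign and scaling conventions:

```python
import numpy as np

N = 8
x = np.random.randn(N)
n = np.arange(N)

# DFT matrix: entry (k, n) is exp(-j 2 pi k n / N)  (Equation 9)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)

X = F @ x                   # analysis (DFT)
x_rec = F.conj().T @ X / N  # synthesis (IDFT, Equation 10)

assert np.allclose(X, np.fft.fft(x))
assert np.allclose(x, x_rec)
```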
The basis made of complex sinusoids is only one possible basis among infinitely many. This basis focuses on representing frequencies correctly and is therefore well localized in frequency, but it is not localized in time. On the other hand, it is possible to create a basis made of Dirac pulses that provides perfect localization in time but no localization in frequency.
A compromise between sinusoids and impulses is given by bases made of oscillating signals with a temporal limitation, such as wavelets. A wavelet is a bandpass filter centered on a specific frequency with a specific bandwidth, and therefore it has a localization both in time and in frequency. The Gabor wavelet represents the best compromise, in terms of the Heisenberg uncertainty principle, between time and frequency localization. More information on this vast subject can be found elsewhere online.
Convolution
Convolution is a mathematical operation defined on vector spaces that has important applications in signal theory, and it can be defined in terms of the inner product. In general, the inner product between two vectors can be seen as:
- the projection of one vector onto the other, as discussed in the previous sections (or, in other words, the product of their magnitudes scaled by the cosine of the angle between them);
- the sum of the elements of one vector, weighted by the elements of the other;
- the calculation of the similarity (covariance) of the two vectors.
The convolution, then, can be defined as an inner product that is repeated over time:

$$ (x * y)[n] = \sum_{m} x[m] \, y[n - m] \qquad (11) $$

where $y$ is time-reversed and shifted by $n$ before each product is taken. Accordingly, convolution can be interpreted as:
- the time series given by one signal weighted by another that slides along it;
- the cross-correlation between the two signals (their similarity in time), up to a time reversal;
- the time series given by the mapping between the two signals;
- a filtering process.
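The "inner product repeated over time" view translates directly into code; this sketch computes Equation 11 as an explicit sum of products and compares the result with np.convolve:

```python
import numpy as np

def convolve(x, y):
    """Linear convolution as an inner product repeated over time (Equation 11)."""
    N, M = len(x), len(y)
    out = np.zeros(N + M - 1)
    for n in range(len(out)):
        # sum over every m for which both x[m] and y[n - m] exist
        for m in range(max(0, n - M + 1), min(N, n + 1)):
            out[n] += x[m] * y[n - m]
    return out

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0, 1.0])
assert np.allclose(convolve(x, y), np.convolve(x, y))
```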
There is an important relation between the DFT and convolution: the (circular) convolution in the time domain between two signals is equal to the product of their DFTs. Formally:

$$ \mathrm{DFT}(x * y) = \mathrm{DFT}(x) \cdot \mathrm{DFT}(y) \qquad (12) $$

where the product on the right-hand side is taken element by element.
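A quick numerical check of Equation 12; note that with the DFT the equality holds for circular convolution, so the direct computation below wraps the shifted index modulo $N$:

```python
import numpy as np

x = np.random.randn(16)
y = np.random.randn(16)
N = len(x)

# circular convolution via the convolution theorem (Equation 12)
conv_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

# direct circular convolution for comparison
conv_direct = np.array([sum(x[m] * y[(n - m) % N] for m in range(N))
                        for n in range(N)])

assert np.allclose(conv_fft, conv_direct)
```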