1. Consider the regression model

   y_n = \theta^T x_n + \eta_n,  n = 1, 2, \ldots, N,

   where \theta \in \mathbb{R}^L is an L \times 1 vector and the noise samples \eta = [\eta_1, \ldots, \eta_N]^T come from a zero-mean Gaussian random vector with covariance matrix \Sigma_\eta. If X = [x_1, \ldots, x_N]^T is the input matrix and y = [y_1, \ldots, y_N]^T is an N \times 1 vector, show that

   \hat{\theta} = (X^T \Sigma_\eta^{-1} X)^{-1} X^T \Sigma_\eta^{-1} y

   is an optimal estimate of \theta.
Hint: You can follow the least squares (LS) technique introduced in Lecture 2 to solve this problem; notice that \Sigma_\eta may not be a diagonal matrix.
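One way to approach this (a sketch, under the assumption that "optimal" is meant in the maximum-likelihood / weighted least-squares sense): since \eta \sim \mathcal{N}(0, \Sigma_\eta) and y = X\theta + \eta, maximizing the likelihood of y is equivalent to minimizing the weighted squared error

J(\theta) = (y - X\theta)^T \Sigma_\eta^{-1} (y - X\theta).

Setting the gradient to zero,

\nabla J(\theta) = -2 X^T \Sigma_\eta^{-1} (y - X\theta) = 0 \;\Longrightarrow\; X^T \Sigma_\eta^{-1} X \, \hat{\theta} = X^T \Sigma_\eta^{-1} y,

which yields the stated estimate, provided X^T \Sigma_\eta^{-1} X is invertible. For \Sigma_\eta = \sigma^2 I this reduces to the ordinary LS solution of Lecture 2.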
2. (Simulation Problem of Noise Cancellation). The noise cancellation application is illustrated in the
following figure:
[Figure 1: A block diagram for a noise canceler. The primary signal d_n = s_n + v_1(n) and the reference noise v_2(n) enter the noise canceler, whose output is subtracted from d_n to form the error (restored) signal e_n.]
The signal of interest is a realization of a process s_n, which is contaminated by the noise process v_1(n). For example, s_n may be the speech signal of the pilot in the cockpit and v_1(n) the aircraft noise at the location of the microphone. We assume that v_1(n) is expressed as

v_1(n) = a_1 v_1(n-1) + \eta_n,

where a_1 is a constant. The random signal v_2(n) is a noise sequence which is related to v_1(n), but it is statistically independent of s_n. We assume v_2(n) can be expressed as

v_2(n) = a_2 v_2(n-1) + \eta_n.

Note that both v_1(n) and v_2(n) are generated by the same noise source, \eta_n, which is assumed to be a zero-mean (white) Gaussian random variable with variance \sigma_\eta^2. The input to the noise canceler is v_2(n), whereas the output of the noise canceler is modeled as
\hat{d}_n = w_0 v_2(n) + w_1 v_2(n-1) = w^T \mathbf{v}_2(n),

where w = [w_0 \; w_1]^T and \mathbf{v}_2(n) = [v_2(n) \; v_2(n-1)]^T. The goal here is to compute the weights of the noise canceler (i.e., w_0 and w_1) in order to optimally remove (in the MSE sense) the noise v_1(n) from s_n + v_1(n).
The optimal w can be found by using the following normal equations:

\frac{\sigma_\eta^2}{1-a_2^2} \begin{bmatrix} 1 & a_2 \\ a_2 & 1 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \frac{\sigma_\eta^2}{1-a_1 a_2} \begin{bmatrix} 1 \\ a_1 \end{bmatrix}.
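These entries can be checked from the AR(1) models (a sketch, assuming |a_1| < 1 and |a_2| < 1 so that both processes are stationary): writing v_1(n) = \sum_{i \ge 0} a_1^i \eta_{n-i} and v_2(n) = \sum_{j \ge 0} a_2^j \eta_{n-j}, one finds

E[v_2(n) v_2(n-k)] = \frac{a_2^k \sigma_\eta^2}{1-a_2^2}, \qquad E[d_n v_2(n-k)] = E[v_1(n) v_2(n-k)] = \frac{a_1^k \sigma_\eta^2}{1-a_1 a_2}, \qquad k = 0, 1,

where the second equality uses the independence of s_n and v_2(n). Stacking these values for k = 0, 1 gives the 2 \times 2 system above.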
According to the following steps, use Matlab (or any other programming language) to solve the problem of noise cancellation (a Matlab sketch implementing these steps is given after the hint below):
(a) Create 5000 data samples of the signal s_n = \cos(2\pi f_0 n), for f_0 = 0.001.
(b) Create 5000 data samples of v_1(n) = a_1 v_1(n-1) + \eta_n (initializing at zero), where \eta_n represents zero-mean Gaussian noise with variance \sigma_\eta^2 = 0.0025 and a_1 = 0.8.
(c) Add s_n and v_1(n) to obtain d_n = s_n + v_1(n) and plot d_n. This represents the contaminated signal.
(d) Create 5000 data samples of v_2(n) = a_2 v_2(n-1) + \eta_n (initializing at zero), where \eta_n represents the same noise sequence obtained in (b) and a_2 = 0.75.
(e) Find the optimal w using the above normal equations. Create the sequence of the restored signal \hat{s}_n = d_n - w_0 v_2(n) - w_1 v_2(n-1) and plot the result.
(f) Repeat steps (b)-(e) using a_2 = 0.9, 0.8, 0.7, 0.6, 0.5. Comment on the results.
Hint: The plots you generate would be similar to the plots in Fig. 4.14 of the textbook.
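A minimal Matlab sketch of steps (a)-(e), assuming the normal equations as reconstructed above; the variable names and the use of filter for the AR(1) recursions are illustrative choices, not part of the assignment.

% Steps (a)-(e) of the noise cancellation simulation (sketch).
N      = 5000;                       % number of samples
f0     = 0.001;                      % signal frequency
a1     = 0.8;  a2 = 0.75;            % AR coefficients of v1(n) and v2(n)
sigma2 = 0.0025;                     % noise variance sigma_eta^2

n   = (1:N)';
s   = cos(2*pi*f0*n);                % (a) signal of interest
eta = sqrt(sigma2)*randn(N,1);       % common white Gaussian noise source

% (b), (d): AR(1) noises driven by the same eta, zero initial conditions
v1 = filter(1, [1 -a1], eta);        % v1(n) = a1*v1(n-1) + eta(n)
v2 = filter(1, [1 -a2], eta);        % v2(n) = a2*v2(n-1) + eta(n)

d = s + v1;                          % (c) contaminated signal
figure; plot(d); title('contaminated signal d_n');

% (e) optimal weights from the normal equations, then the restored signal
Sigma_v2 = (sigma2/(1-a2^2)) * [1 a2; a2 1];
p        = (sigma2/(1-a1*a2)) * [1; a1];
w        = Sigma_v2 \ p;

v2_prev = [0; v2(1:end-1)];          % v2(n-1)
s_hat   = d - w(1)*v2 - w(2)*v2_prev;
figure; plot(s_hat); title('restored signal');
% (f): rerun with a2 = 0.9, 0.8, 0.7, 0.6, 0.5 and compare the plots.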
3. (Simulation Problem of Linear Regression) Consider the following cost function
J(\theta) = E\big[(y - x^T \theta)^2\big] = \sigma_y^2 - 2\theta^T p + \theta^T \Sigma_x \theta,

where \theta = [\theta_1 \; \theta_2]^T, \Sigma_x = E[x x^T] is the input covariance matrix, and p = E[x y] is the input-output cross-correlation vector. Suppose p = [0.05 \; 0.03]^T and consider the following two covariance matrices:
\Sigma_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0.1 \end{bmatrix} \quad \text{and} \quad \Sigma_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
Now do the following problems:
(a) Compute the corresponding optimal solutions, i.e., \theta_1^\dagger = \Sigma_1^{-1} p and \theta_2^\dagger = \Sigma_2^{-1} p.
(b) Apply the following gradient descent algorithm

\theta^{(i)} = \theta^{(i-1)} + \mu \big(p - \Sigma_2 \theta^{(i-1)}\big)

to estimate \theta_2^\dagger. Set the step size \mu equal to (i) its optimal value \mu_o and (ii) equal to \mu_o/2. For these two choices of \mu, plot the error e^{(i)} = \|\theta^{(i)} - \theta_2^\dagger\|^2 at each iteration step i. Compare the convergence speeds of these two choices of \mu.
(c) Plot the coefficients of the successive estimates, \theta^{(i)}, for both step sizes, together with the isovalue contours of the cost function, like Fig. 5.9. What do you observe regarding the trajectory towards the minimum?
Hint: You may need to use the Matlab function contour(·, ·, ·) to plot the figures.
(d) Apply the gradient descent algorithm in (b) to find \theta_1^\dagger using \Sigma_1 and p. Use the optimal step size \mu_o to plot e^{(i)} = \|\theta^{(i)} - \theta_1^\dagger\|^2 in the same figure as the e^{(i)} of (b). Compare the convergence speeds and comment on them. (A Matlab sketch covering parts (a), (b), and (d) is given below.)
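A minimal Matlab sketch for parts (a), (b), and (d), under the assumptions that the optimal fixed step size for this quadratic cost is \mu_o = 2/(\lambda_{\max} + \lambda_{\min}) (the standard result; check against the lecture notes), that the iterations start from \theta^{(0)} = 0, and that 100 iterations suffice; none of these choices are specified in the problem.

% Parts (a), (b), (d) of the linear regression simulation (sketch).
p      = [0.05; 0.03];
Sigma1 = [1 0; 0 0.1];
Sigma2 = [1 0; 0 1];

theta1_opt = Sigma1 \ p;             % (a) optimal solutions
theta2_opt = Sigma2 \ p;

% (b) gradient descent for the cost defined by Sigma2
lam   = eig(Sigma2);
mu_o  = 2/(max(lam) + min(lam));     % assumed optimal step size
niter = 100;                         % assumed number of iterations
figure; hold on;
for mu = [mu_o, mu_o/2]
    theta = zeros(2,1);              % assumed initialization
    err   = zeros(niter,1);
    for i = 1:niter
        theta  = theta + mu*(p - Sigma2*theta);
        err(i) = norm(theta - theta2_opt)^2;
    end
    plot(err);                       % error curve e^{(i)}
end
% (d): repeat the loop with Sigma1, its own mu_o, and theta1_opt,
% and plot the resulting error curve in the same figure.
% (c): evaluate J(theta) on a grid and use contour(...) for the isovalue curves.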