In signal processing and data science, noise reduction is critical for improving the quality of data. One effective technique is Eigenvalue Decomposition (EVD) applied to the covariance matrix of the dataset, the same decomposition that underlies Principal Component Analysis (PCA).
The eigenvectors form an orthogonal basis, and the corresponding eigenvalues indicate how much variance (signal) is captured along each eigenvector direction. Noise is reduced by retaining only the dominant (larger) eigenvalues in the eigenvalue matrix while discarding the smaller ones.
Step-by-Step: Noise Reduction with Eigenvalue Decomposition
Let’s say you have a dataset whose covariance matrix is \( C \) (symmetric and positive semi-definite). Here’s the process mathematically:
1. Perform Eigenvalue Decomposition on \( C \):
\[ C = V \Lambda V^T \]
Where:
- \( V \) contains the eigenvectors (principal components).
- \( \Lambda \) is a diagonal matrix of eigenvalues.
2. Remove Noise
Identify and discard the eigenvectors associated with small eigenvalues. These components typically represent noise because they account for very little variance in the data.
3. Reconstruct the Denoised Covariance Matrix
Reconstruct the cleaned (denoised) version of the covariance matrix using only the top \( k \) eigenvectors and eigenvalues:
\[ C_{\text{reduced}} = V_{\text{reduced}} \Lambda_{\text{reduced}} V_{\text{reduced}}^T \]
This reconstruction preserves the directions of highest variance (signal) while filtering out the less significant components (noise).
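Here is a minimal MATLAB sketch of these three steps, assuming the covariance matrix C has already been estimated from the data; the variable names (C, V, Lambda, vals, k) are illustrative:
% Denoise a covariance matrix C via truncated eigenvalue decomposition
[V, Lambda] = eig(C);                        % Step 1: C = V*Lambda*V'
[vals, idx] = sort(diag(Lambda), 'descend'); % order by variance captured
V = V(:, idx);
k = 1;                                       % Step 2: keep the top-k components
C_reduced = V(:, 1:k) * diag(vals(1:k)) * V(:, 1:k)'; % Step 3: reconstruct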
In summary, eigenvalue decomposition of a Hermitian (or real symmetric) covariance matrix is a powerful noise-reduction technique: by analyzing the eigenvalues and retaining only the components with significant variance, we can effectively suppress noise while preserving the essential structure of the data.
Matrix Example
Given a symmetric covariance matrix:
\[ C = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} \]
Its eigenvalues are approximately \( \lambda_1 = 5.562 \) and \( \lambda_2 = 1.438 \); the corresponding eigenvectors form the matrix \( V \).
We reduce noise by zeroing out the smaller eigenvalue:
\[ \Lambda_{\text{reduced}} = \begin{bmatrix} 5.562 & 0 \\ 0 & 0 \end{bmatrix} \]
Reconstruct the denoised matrix:
\[ C_{\text{reduced}} = V \Lambda_{\text{reduced}} V^T \approx \begin{bmatrix} 3.455 & 2.698 \\ 2.698 & 2.106 \end{bmatrix} \]
This retains the main structure (signal) and removes low-variance noise.
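This example can be checked numerically in MATLAB (the values above are rounded to three decimals):
% Verify the 2x2 example
C = [4 2; 2 3];
[V, D] = eig(C);                      % eigenvalues ~1.438 and ~5.562
[vals, idx] = sort(diag(D), 'descend');
V = V(:, idx);
Lambda_reduced = diag([vals(1); 0]);  % zero out the smaller eigenvalue
C_reduced = V * Lambda_reduced * V'   % ~[3.455 2.698; 2.698 2.106]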
MATLAB Code
% Step 1: Generate a clean sine wave signal
n = 1000; % Number of time samples
t = linspace(0, 2*pi, n)';
freq = 3; % Number of cycles over the 2*pi interval
X_clean = sin(freq * t); % Clean 1D sine wave
% Create a multi-dimensional version (e.g., repeat with phase shifts)
X = [X_clean, sin(freq * t + pi/4), sin(freq * t + pi/2)];
% Step 2: Add Gaussian noise to the signal
noise = 0.3 * randn(size(X));
X_noisy = X + noise;
% Step 3: Compute the covariance matrix of the noisy signal
C_noisy = cov(X_noisy);
% Step 4: Perform eigenvalue decomposition
[V, D] = eig(C_noisy);
% Step 5: Sort eigenvalues and eigenvectors in descending order
[eigenvalues, idx] = sort(diag(D), 'descend');
V_sorted = V(:, idx);
% Step 6: Retain top-k components (e.g., k = 2)
k = 2;
V_reduced = V_sorted(:, 1:k);
% Step 7: Center the data, then project it onto the reduced eigen-space
% (cov() implicitly mean-centers, so the projection must use centered data)
mu = mean(X_noisy);
X_projected = (X_noisy - mu) * V_reduced;
% Step 8: Reconstruct (denoise) the data from the reduced components,
% adding the mean back in
X_denoised = X_projected * V_reduced' + mu;
% Step 9: Plot results
figure;
subplot(3, 1, 1);
plot(t, X(:, 1), 'b'); title('Original Sine Wave (1st Dimension)');
xlabel('Time'); ylabel('Amplitude');
subplot(3, 1, 2);
plot(t, X_noisy(:, 1), 'r'); title('Noisy Sine Wave (1st Dimension)');
xlabel('Time'); ylabel('Amplitude');
subplot(3, 1, 3);
plot(t, X_denoised(:, 1), 'g'); title('Denoised Sine Wave (1st Dimension)');
xlabel('Time'); ylabel('Amplitude');
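In the script above, k is fixed at 2. In practice, k is often chosen from the eigenvalue spectrum itself, for example by keeping just enough components to explain a target fraction of the total variance; the 95% threshold below is an illustrative choice, not a rule. Using the sorted eigenvalues from Step 5:
% Choose k so the retained components explain at least 95% of the variance
explained = cumsum(eigenvalues) / sum(eigenvalues);
k = find(explained >= 0.95, 1); % smallest k meeting the threshold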