
What’s happening inside a simple Neural Network?


In general, a neural network consists of an input layer, one or more hidden layers, and an output layer. Real-world networks are nonlinear because activation functions between the layers introduce nonlinearity. (The model examined here is a small fully connected network, not a convolutional one: it contains only linear and activation layers.)

In our case, the model takes a single input feature, which is passed through a linear layer with 13 output units, followed by a tanh activation applied elementwise to those 13 values. The result is then passed through a second linear layer that maps the 13 hidden values to a single output.

Linear layers contain weights and biases and compute y = Wx + b, while the tanh activation squashes its inputs into the range [-1, 1].
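The text does not show the model definition itself, but based on the parameter shapes discussed below, it likely looks something like this sketch (an assumption reconstructed from the 1 → 13 → 1 layer sizes described above):

```python
import torch
import torch.nn as nn

# A sketch of the model described above (assumed from the layer sizes
# in the text): one input feature, a 13-unit hidden linear layer,
# a Tanh activation, and a single output value.
seq_model = nn.Sequential(
    nn.Linear(1, 13),   # 1 input feature -> 13 hidden units
    nn.Tanh(),          # elementwise nonlinearity, outputs in [-1, 1]
    nn.Linear(13, 1),   # 13 hidden units -> 1 output value
)
```

Passing a batch of inputs of shape `[N, 1]` through this model produces outputs of shape `[N, 1]`.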


1. What’s happening overall

The text explains how PyTorch stores and updates the weights and biases (called parameters) of a small neural network built using nn.Sequential.

A “parameter” is just a number that the model can learn, like weights and biases.


2. model.parameters()

When you call seq_model.parameters(), PyTorch collects all the weights and biases from every layer in the model.


[param.shape for param in seq_model.parameters()]
# Output:
[torch.Size([13, 1]), torch.Size([13]), torch.Size([1, 13]), torch.Size([1])]
  

This means:

  • First layer weights: [13, 1]
  • First layer bias: [13]
  • Second layer weights: [1, 13]
  • Second layer bias: [1]

These are the exact numbers the optimizer (like SGD or Adam) will update during training.
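This is typically where parameters() gets handed to the optimizer. A minimal sketch, assuming an SGD optimizer and the same 1 → 13 → 1 model:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Assumed model matching the parameter shapes above
seq_model = nn.Sequential(nn.Linear(1, 13), nn.Tanh(), nn.Linear(13, 1))

# The optimizer receives every tensor yielded by seq_model.parameters();
# these are exactly the tensors optimizer.step() will update.
optimizer = optim.SGD(seq_model.parameters(), lr=1e-3)

# The optimizer now holds references to all four parameter tensors
print(len(optimizer.param_groups[0]['params']))  # 4
```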


3. After backward()

When you run loss.backward(), PyTorch computes the gradient of the loss with respect to each parameter — the direction and magnitude by which that parameter should change.

  1. Compute loss
  2. Call loss.backward() → get gradients
  3. Call optimizer.step() → update weights
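The three steps above map directly onto code. A minimal single training step, assuming a mean-squared-error loss and made-up data:

```python
import torch
import torch.nn as nn
import torch.optim as optim

seq_model = nn.Sequential(nn.Linear(1, 13), nn.Tanh(), nn.Linear(13, 1))
optimizer = optim.SGD(seq_model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(20, 1)           # made-up inputs
y = 3.0 * x + 1.0                # made-up targets

optimizer.zero_grad()            # clear any gradients from a previous step
loss = loss_fn(seq_model(x), y)  # 1. compute loss
loss.backward()                  # 2. fill every parameter's .grad
optimizer.step()                 # 3. update weights using those gradients
```

Note the optimizer.zero_grad() call: PyTorch accumulates gradients across backward() calls, so they must be cleared before each new step.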

4. named_parameters()

This function yields the name of each parameter along with the parameter tensor itself.


for name, param in seq_model.named_parameters():
    print(name, param.shape)

# Output:
0.weight torch.Size([13, 1])
0.bias torch.Size([13])
2.weight torch.Size([1, 13])
2.bias torch.Size([1])
  

Here 0 and 2 are the indices of the layers inside nn.Sequential; index 1 is the Tanh activation, which has no parameters of its own and therefore does not appear.
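A quick way to confirm that the Tanh layer at index 1 contributes no parameters, using the same assumed 1 → 13 → 1 model:

```python
import torch
import torch.nn as nn

seq_model = nn.Sequential(nn.Linear(1, 13), nn.Tanh(), nn.Linear(13, 1))

# Tanh has no weights or biases, so the names skip from 0.* to 2.*
names = [name for name, _ in seq_model.named_parameters()]
print(names)  # ['0.weight', '0.bias', '2.weight', '2.bias']
```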


5. Using OrderedDict for readable names


from collections import OrderedDict
seq_model = nn.Sequential(OrderedDict([
    ('hidden_linear', nn.Linear(1, 8)),
    ('hidden_activation', nn.Tanh()),
    ('output_linear', nn.Linear(8, 1))
]))
  

Now the parameters look more descriptive:


hidden_linear.weight torch.Size([8, 1])
hidden_linear.bias torch.Size([8])
output_linear.weight torch.Size([1, 8])
output_linear.bias torch.Size([1])

  

6. Accessing specific parameters


seq_model.output_linear.bias

# Output:
Parameter containing:
tensor([-0.0173], requires_grad=True)
  

The requires_grad=True flag means PyTorch tracks gradients for this bias, so the optimizer will update it during training.


7. Checking gradients

After a backward pass, you can inspect the gradient of each parameter through its .grad attribute:


seq_model.hidden_linear.weight.grad
  

This shows the gradient of the loss with respect to each weight in the hidden layer — the values the optimizer uses for the next update — not the amount by which the weight has already changed.
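Before the first backward() call, .grad is None, and gradients accumulate across calls unless cleared. A small sketch illustrating both points (using positional indexing instead of the named layers, since this model is built without an OrderedDict):

```python
import torch
import torch.nn as nn

seq_model = nn.Sequential(nn.Linear(1, 13), nn.Tanh(), nn.Linear(13, 1))

# Before any backward pass, .grad is None
assert seq_model[0].weight.grad is None

x = torch.randn(4, 1)
loss = seq_model(x).pow(2).mean()  # made-up scalar loss
loss.backward()

# Now .grad holds d(loss)/d(weight) for the hidden layer
print(seq_model[0].weight.grad.shape)  # torch.Size([13, 1])

# Gradients accumulate, so clear them before the next backward()
seq_model.zero_grad()
```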


Summary Table

  Concept              Meaning
  parameters()         Collects all weights and biases of the model
  named_parameters()   Same, but includes names for easier identification
  loss.backward()      Computes gradients (how much each parameter should change)
  optimizer.step()     Updates all parameters using those gradients
  OrderedDict          Lets you name your layers instead of using numbers
  .grad                Holds the gradient of a parameter after backpropagation

In short: PyTorch tracks every learnable weight and bias in your model, computes their gradients when you train, and updates them using the optimizer to make the model perform better.
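Putting the whole picture together — a compact end-to-end sketch using the named-layer model from section 5 and made-up linear data (the data, learning rate, and epoch count are all illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from collections import OrderedDict

torch.manual_seed(0)  # reproducibility for this sketch

seq_model = nn.Sequential(OrderedDict([
    ('hidden_linear', nn.Linear(1, 8)),
    ('hidden_activation', nn.Tanh()),
    ('output_linear', nn.Linear(8, 1)),
]))

optimizer = optim.SGD(seq_model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(64, 1)
y = 2.0 * x                          # made-up target relationship

for epoch in range(200):
    optimizer.zero_grad()            # clear accumulated gradients
    loss = loss_fn(seq_model(x), y)  # forward pass + loss
    if epoch == 0:
        first_loss = loss.item()     # remember the starting loss
    loss.backward()                  # fill .grad on every parameter
    optimizer.step()                 # update all weights and biases

# Named access works because of the OrderedDict layer names
print(seq_model.output_linear.bias)
```

After the loop, the loss is lower than where it started, and every parameter can still be inspected by name through named_parameters() or dotted access.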

