Algorithm Description
The proposed adaptive beam-training procedure enables the Base Station (BS) and Mobile Station (MS) to jointly identify the optimal transmit–receive beam pair from predefined codebooks. The algorithm operates over multiple stages, where each stage progressively narrows the search space. This hierarchical refinement significantly reduces training overhead compared to exhaustive beam scanning.
1. Initialization
Both BS and MS are assumed to know the total number of angular directions N, the hierarchical resolution parameter K, and the codebooks F (for BS beams) and W (for MS beams).
The initial codebook subset indices are:
k_1^BS = 1, k_1^MS = 1.
The total number of adaptive stages is:
S = log_K(N),
ensuring that the search space is reduced by a factor of K in each stage.
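As a concrete illustration, a minimal MATLAB sketch of this initialization might look like the following; the values of N and K are example choices, and the codebooks F and W are assumed to be designed offline rather than constructed here.

```matlab
% Example initialization (illustrative values only).
N = 64;                      % total number of finest-resolution angular directions
K = 4;                       % beams tested per stage (hierarchical resolution)
S = round(log(N) / log(K));  % number of adaptive stages, S = log_K(N)

k_BS = 1;                    % initial BS codebook subset index, k_1^BS
k_MS = 1;                    % initial MS codebook subset index, k_1^MS
```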
2. Stage-wise Beam Training
For each stage s = 1,…,S, both BS and MS test K beam candidates.
2.1 BS Transmission
For each BS beam index m_BS = 1, …, K, the BS transmits using the beam:
F^(s, k_s^BS)[:, m_BS].
2.2 MS Reception
For each BS beam, the MS cycles through its own beam candidates m_MS = 1, …, K, applying:
W^(s, k_s^MS)[:, m_MS].
2.3 Measurement Collection
The MS records the received signal corresponding to each transmit–receive beam pair:
y_{m_BS} = √P_s · (W^(s, k_s^MS))^H · H · F^(s, k_s^BS)[:, m_BS] + n_{m_BS}.
The MS stores all measurements in:
Y^(s) = [y_1, y_2, …, y_K].
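A sketch of how one training stage could collect the K × K measurement grid is given below. The antenna counts Nt and Nr, transmit power Ps, noise level sigma, channel H, and the random stand-ins Fs and Ws for the stage-s codebook matrices are placeholders introduced only so the fragment runs; a real implementation would draw Fs and Ws from the hierarchical codebooks F and W.

```matlab
% Placeholder setup so the fragment runs stand-alone.
Nt = 32; Nr = 32; K = 4;                              % antenna counts and beams per stage
Ps = 1; sigma = 0.1;                                  % transmit power and noise std
H  = (randn(Nr, Nt) + 1j*randn(Nr, Nt)) / sqrt(2);    % placeholder channel
Fs = exp(1j*2*pi*rand(Nt, K)) / sqrt(Nt);             % stand-in for F^(s, k_s^BS)
Ws = exp(1j*2*pi*rand(Nr, K)) / sqrt(Nr);             % stand-in for W^(s, k_s^MS)

% Collect the K-by-K measurement grid Y^(s):
% rows index MS beams, columns index BS beams.
Y = zeros(K, K);
for mBS = 1:K                                         % BS transmits beam mBS
    for mMS = 1:K                                     % MS receives with beam mMS
        n = sigma * (randn + 1j*randn) / sqrt(2);     % receiver noise sample
        Y(mMS, mBS) = sqrt(Ps) * (Ws(:, mMS)' * H * Fs(:, mBS)) + n;
    end
end
```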
3. Beam Pair Selection
The strongest beam pair is identified by maximizing the magnitude of the collected measurements:
(m_BS^*, m_MS^*) = arg max_{m_BS, m_MS} [Y^(s) ⊙ Y^(s)*]_{m_MS, m_BS}.
This pair represents the best-performing combination in the current stage.
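Continuing from the measurement sketch above (reusing its Y), the selection step reduces to an element-wise power maximization:

```matlab
% Pick the strongest transmit-receive pair: abs(Y).^2 equals Y .* conj(Y)
% element-wise, so the argmax gives the best (mMS, mBS) combination.
[~, idx] = max(abs(Y(:)).^2);
[mMS_star, mBS_star] = ind2sub(size(Y), idx);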
4. Subset Update
The indices of the selected beam subsets for the next stage are updated using:
k_{s+1}^BS = K(m_BS^* − 1) + 1,
k_{s+1}^MS = K(m_MS^* − 1) + 1.
This selects the sub-region of the angular domain containing the strongest path.
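Continuing from the selection sketch, one way to write this update in MATLAB is:

```matlab
% Descend into the sub-region indicated by the winning beam pair.
k_BS = K * (mBS_star - 1) + 1;   % k_{s+1}^BS
k_MS = K * (mMS_star - 1) + 1;   % k_{s+1}^MS
```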
5. Final Parameter Estimation
After completing all stages, the estimated angle parameters are:
φ̂ = φ̄_{k_{S+1}^BS}, θ̂ = θ̄_{k_{S+1}^MS}.
The corresponding path gain is estimated as:
α̂ = √(ρ / P_S) · G^(S) · [Y^(S)]_{m_MS^*, m_BS^*}.
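Continuing from the earlier fragments, a rough sketch of this read-out step could look like the following. The angle grids phi_bar and theta_bar, the scaling rho, and the final-stage beamforming gain G_S are hypothetical placeholders standing in for quantities defined by the system model and codebook design.

```matlab
% Hypothetical placeholders so the fragment runs; a real system would use the
% quantized angle grids and beamforming gain from its codebook design.
phi_bar   = linspace(0, 2*pi, 64);   % quantized AoD grid (placeholder)
theta_bar = linspace(0, 2*pi, 64);   % quantized AoA grid (placeholder)
rho = 1; G_S = 1;                    % path-loss scaling and final-stage gain (placeholders)

% Read off the estimates once the final stage S is complete.
phi_hat   = phi_bar(k_BS);           % estimated AoD, indexed by k_{S+1}^BS
theta_hat = theta_bar(k_MS);         % estimated AoA, indexed by k_{S+1}^MS
alpha_hat = sqrt(rho / Ps) * G_S * Y(mMS_star, mBS_star);   % path gain, per the formula above
```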
MATLAB Code
Hierarchical Beam Training Logic
Suppose:
- N = 64 → total finest-resolution beams
- K = 4 → number of beams tested per stage
Stage 1 (coarse search)
- You divide the 64 total beams into K = 4 blocks, each containing 64 / 4 = 16 beams.
- You test one coarse beam per block, so at stage 1 you test K = 4 beams (one covering each candidate block).
- Goal: pick the block containing the strongest beam.
Stage 2 (intermediate search)
- You now focus on the block selected in stage 1.
- That block covers 16 beams; divide it into 4 sub-blocks of 4 beams each and test one beam per sub-block.
- You test K = 4 beams in this stage again.
- Goal: narrow down further.
Stage 3 (fine search)
- The selected sub-block now contains the final 4 beams.
- Test these 4 beams, pick the best one.
Key Points
- At each stage, you only test K beams, not all N beams.
- The algorithm zooms in progressively.
- After S = log₄(64) = 3 stages, you have identified 1 beam out of 64.
Stage Summary
| Stage | Beams tested (K) | Candidate beams remaining | Notes |
|---|---|---|---|
| 1 | 4 | 64 | Coarse search |
| 2 | 4 | 16 | Intermediate search |
| 3 | 4 | 4 | Fine search |
At stage 1, you test 4 beams to choose one block of 16 beams.
At stage 2, you test 4 beams to choose one block of 4 beams.
At stage 3, you test 4 beams to pick the final beam.
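The bookkeeping behind this zoom-in can be sketched in a few lines of MATLAB. The snippet below is only a toy illustration of the index arithmetic: it assumes the index of the strongest finest-resolution beam (bestBeam) is known up front, so the measurement-and-argmax step of each stage is replaced by a direct look-up.

```matlab
% Toy illustration of the 64 -> 16 -> 4 -> 1 zoom-in (no channel, no noise).
N = 64; K = 4; S = round(log(N) / log(K));
bestBeam = 42;                        % assumed ground truth, for the demo only
lo = 1; hi = N;                       % current range of candidate finest beams
for s = 1:S
    blockSize = (hi - lo + 1) / K;    % finest beams covered by each candidate
    fprintf('Stage %d: test %d beams, each covering %d directions\n', s, K, blockSize);
    mStar = ceil((bestBeam - lo + 1) / blockSize);   % stand-in for the argmax step
    lo = lo + (mStar - 1) * blockSize;               % zoom into the winning block
    hi = lo + blockSize - 1;
end
fprintf('Identified beam %d after %d stages\n', lo, S);
```

For bestBeam = 42, this prints the three stages and ends by reporting beam 42, consistent with the three-stage count in the table above.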
If desired, a diagram can be drawn to show the 64 → 16 → 4 → 1 zoom-in process visually.
Summary
The algorithm carries out a multi-stage hierarchical search over the BS and MS codebooks. In every stage, it evaluates K × K beam pairs, selects the best combination, and restricts the next stage to a narrower sub-region. After S = log_K(N) stages, the optimal beam pair is identified with dramatically reduced training overhead.