
Polar code (coding theory)

From Wikipedia, the free encyclopedia

In information theory, polar codes are a class of linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity.[1] Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University.
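
For intuition, the polarization effect can be computed in closed form for the binary erasure channel (BEC): one application of Arikan's 2×2 kernel turns two copies of a BEC with erasure probability ε into a degraded synthetic channel with erasure probability 2ε − ε² and an upgraded one with erasure probability ε². The sketch below is illustrative only (the variable names and parameter choices are not from the cited sources); it recurses this rule and selects the most reliable positions as the information set.

```python
# Illustrative sketch: channel polarization for a binary erasure channel (BEC).
# One kernel step maps BEC(eps) into a "worse" channel BEC(2*eps - eps**2) and a
# "better" channel BEC(eps**2).  Recursing log2(N) times yields N synthetic
# channels; data bits go on the most reliable (lowest-erasure) positions.

def synthetic_erasure_probs(eps, n):
    """Erasure probabilities of the N = 2**n synthetic channels of BEC(eps)."""
    probs = [eps]
    for _ in range(n):
        nxt = []
        for z in probs:
            nxt.append(2 * z - z * z)   # W^- : less reliable
            nxt.append(z * z)           # W^+ : more reliable
        probs = nxt
    return probs

if __name__ == "__main__":
    N, K, eps = 1024, 512, 0.4
    z = synthetic_erasure_probs(eps, 10)
    # Choose the K most reliable positions as the information set; freeze the rest.
    info_set = sorted(range(N), key=lambda i: z[i])[:K]
    polarized = sum(1 for p in z if p < 1e-3 or p > 1 - 1e-3)
    print(f"{polarized}/{N} synthetic channels are nearly perfect or nearly useless")
```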

Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(n^ε polylog n) factor for any ε > 0.[2]
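
The O(n log n) encoding cost comes from the butterfly structure of the polar transform: log2(n) stages, each performing n/2 binary additions. The following sketch is illustrative (it omits the bit-reversal permutation used in some formulations, which only reorders codeword bits and does not affect performance):

```python
# Illustrative sketch: the polar transform x = u * F^{(x)n} over GF(2), where F is
# the Arikan kernel [[1,0],[1,1]], computed with the standard butterfly recursion:
# log2(n) stages of n/2 XORs each, i.e. O(n log n) operations in total.

def polar_transform(u):
    """Apply the length-n polar transform in natural (non-bit-reversed) order."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # upper branch takes the XOR of the pair
        step *= 2
    return x

def encode(data_bits, info_set, n):
    """Place data bits on the reliable positions (e.g. the info_set from the
    BEC sketch above), zeros on frozen positions, then apply the transform."""
    u = [0] * n
    for bit, pos in zip(data_bits, sorted(info_set)):
        u[pos] = bit
    return polar_transform(u)
```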

Industrial applications


Polar codes have some limitations when used in industrial applications. Primarily, the original design of polar codes achieves capacity only when block sizes are asymptotically large and a successive cancellation decoder is used. However, at the block sizes used in industry, the performance of successive cancellation decoding is poor compared to well-established coding schemes such as low-density parity-check (LDPC) codes and turbo codes. Polar performance can be improved with successive cancellation list decoding, but its usability in real applications is still questionable due to the poor implementation efficiency of its iterative approach.[3]
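
To make the decoder under discussion concrete, the following is a minimal sketch of plain successive cancellation over log-likelihood ratios (LLRs), using the widely used min-sum approximation for the check-node update; it assumes the natural-order encoder sketched above and channel LLRs as input (e.g. 2y/σ² for BPSK over an AWGN channel). List decoding extends this scheme by keeping several candidate decoding paths at each bit decision instead of a single one.

```python
import math

# Illustrative sketch of successive-cancellation (SC) decoding over LLRs.

def f(a, b):          # check-node combine (min-sum approximation)
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def g(a, b, u):       # bit-node combine, conditioned on the already-decoded bit u
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen):
    """Return the decoded u-vector; frozen[i] is True for frozen positions."""
    n = len(llr)
    if n == 1:
        return [0] if frozen[0] or llr[0] >= 0 else [1]
    half = n // 2
    # First half of u: observed through the less reliable "minus" channels.
    left = sc_decode([f(llr[i], llr[i + half]) for i in range(half)],
                     frozen[:half])
    # Re-encode the decoded half to obtain partial sums
    # (polar_transform is the encoder from the sketch above).
    s = polar_transform(left)
    # Second half of u: "plus" channels, conditioned on the partial sums.
    right = sc_decode([g(llr[i], llr[i + half], s[i]) for i in range(half)],
                      frozen[half:])
    return left + right
```

The data estimate is read off the decoded u-vector at the information positions; frozen positions are known to be zero, which is what the frozen check in the base case exploits.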

In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding. With these improvements, the reported channel performance nearly closes the gap to the Shannon limit, which sets the maximum achievable rate for a given bandwidth and noise level.[4]

In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel.[5][6]

PAC codes


In 2019, Arıkan proposed applying a convolutional pre-transformation before polar coding. This pre-transformed variant of polar codes was dubbed polarization-adjusted convolutional (PAC) codes.[7] It was shown that the pre-transformation can effectively improve the distance properties of polar codes by reducing the number of minimum-weight and, more generally, small-weight codewords,[8] resulting in improved block error rates under near-maximum-likelihood (ML) decoding algorithms such as Fano decoding and list decoding.[9] Fano decoding is a tree-search algorithm that determines the transmitted codeword by utilizing an optimal metric function to efficiently guide the search process.[10] PAC codes are also equivalent to post-transforming polar codes with certain cyclic codes.[11] At short blocklengths, such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes.[12][13]
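
A minimal sketch of the PAC encoding pipeline follows: data bits are rate-profiled into a carrier vector, pre-transformed by a rate-1 binary convolution, and then passed through the ordinary polar transform. The generator (1,0,1,1,0,1,1) and the use of a polar-style information set as the rate profile are illustrative assumptions (other profiles, such as Reed–Muller ones, appear in the literature), and decoding by Fano or list search over the convolution tree is not shown.

```python
# Illustrative sketch of PAC encoding: rate profiling, rate-1 convolutional
# pre-transformation, then the polar transform from the sketch above.

CONV_GEN = [1, 0, 1, 1, 0, 1, 1]   # illustrative memory-6 generator

def convolve(v, c=CONV_GEN):
    """Rate-1 binary convolution: u_i = sum_j c_j * v_{i-j} (mod 2)."""
    n = len(v)
    return [sum(c[j] * v[i - j] for j in range(len(c)) if i - j >= 0) % 2
            for i in range(n)]

def pac_encode(data_bits, rate_profile, n):
    v = [0] * n                        # rate profiling: data on chosen positions
    for bit, pos in zip(data_bits, sorted(rate_profile)):
        v[pos] = bit
    u = convolve(v)                    # convolutional pre-transformation
    return polar_transform(u)          # ordinary polar transform
```

Note that, unlike in conventional polar coding, the convolution spreads each data bit into later positions of u, including frozen ones; the decoder accounts for this when exploring the code tree.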

Neural Polar Decoders


Neural Polar Decoders (NPDs)[14] are an advancement in channel coding that combine neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model. They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels. By the end of training, the weights of the NPD are fixed and can then be used for decoding.
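
A schematic sketch of how the four networks could replace the scalar operations of SC decoding is shown below. The toy multilayer perceptrons, their dimensions, and the untrained random weights are illustrative assumptions only; the mutual-information-based training and the handling of channel memory described in the cited work are omitted.

```python
import numpy as np

# Schematic sketch (not the published implementation): the E/F/G/H networks of an
# NPD stand in for the scalar f/g/LLR operations of SC decoding, operating on
# d-dimensional embeddings instead of LLRs.

d, h = 8, 16                                   # embedding and hidden sizes (toy)

def mlp(in_dim, out_dim, rng):
    """A single-hidden-layer MLP with random (untrained) weights."""
    W1, b1 = rng.normal(size=(in_dim, h)), np.zeros(h)
    W2, b2 = rng.normal(size=(h, out_dim)), np.zeros(out_dim)
    return lambda z: np.tanh(z @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
E = mlp(1, d, rng)          # embedding:        channel output y     -> embedding
F = mlp(2 * d, d, rng)      # check-node:       (e_left, e_right)    -> embedding
G = mlp(2 * d + 1, d, rng)  # bit-node:         (e_left, e_right, u) -> embedding
H = mlp(d, 1, rng)          # embedding-to-LLR: embedding            -> scalar LLR

def npd_sc_decode(y, frozen):
    """SC recursion over embeddings instead of LLRs (toy weights, memoryless case)."""
    emb = [E(np.array([yi])) for yi in y]
    def rec(e, fr):
        n = len(e)
        if n == 1:
            llr = H(e[0])[0]
            return [0] if fr[0] or llr >= 0 else [1]
        half = n // 2
        left = rec([F(np.concatenate([e[i], e[i + half]])) for i in range(half)],
                   fr[:half])
        s = polar_transform(left)      # partial sums, as in plain SC decoding
        right = rec([G(np.concatenate([e[i], e[i + half], [float(s[i])]]))
                     for i in range(half)], fr[half:])
        return left + right
    return rec(emb, frozen)
```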

The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders,[15] whose complexity is determined by the channel model and which are typically used for finite-state channels (FSCs). The computational complexity of NPDs is O(h·d·N log N), where h is the number of hidden units in the neural networks, d is the dimension of the embedding, and N is the block length. In contrast, the computational complexity of SC trellis decoders is O(|S|³·N log N), where S is the state space of the channel model.

NPDs can be integrated into SC decoding[1] schemes such as SC list decoding and CRC-aided SC decoding.[16] They are also compatible with non-uniform and i.i.d. input distributions by integrating them into the Honda-Yamamoto scheme.[17] This flexibility allows NPDs to be used in various decoding scenarios, improving error correction performance while maintaining manageable computational complexity.

References

  1. ^ a b Arikan, Erdal (2009). "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels". IEEE Transactions on Information Theory. 55 (7): 3051–3073. arXiv:0807.3917. doi:10.1109/TIT.2009.2021379. ISSN 0018-9448.
  2. ^ Blake, Christopher G. (2017). "Energy Consumption of Error Control Coding Circuits" (PDF). University of Toronto. Retrieved 2019-10-18.
  3. ^ Arikan, Erdal; Najeeb ul Hassan; Lentmaier, Michael; Montorsi, Guido; Sayir, Jossy (2015). "Challenges and some new directions in channel coding". arXiv:1504.03916 [cs.IT].
  4. ^ "Huawei achieves 27Gbps 5G speeds with Polar Code". Retrieved 2016-10-10.
  5. ^ "3GPP RAN1 meeting #87 final report". 3GPP. Retrieved 31 August 2017.[dead link]
  6. ^ Rowshan, M.; Qiu, M.; Xie, Y.; Gu, X.; Yuan, J. (2024). "Channel Coding Toward 6G: Technical Overview and Outlook". IEEE Open Journal of the Communications Society. 5: 2585–2685. doi:10.1109/OJCOMS.2024.3390000.
  7. ^ Arıkan, E. (2019). "From sequential decoding to channel polarization and back again". arXiv:1908.09594.
  8. ^ Rowshan, M.; Yuan, J. (2023). "On the Minimum Weight Codewords of PAC Codes: The Impact of Pre-Transformation". IEEE Journal on Selected Areas in Information Theory. 4: 487–498. doi:10.1109/JSAIT.2023.3312678.
  9. ^ Rowshan, M.; Burg, A.; Viterbo, E. (2021). "Polarization-Adjusted Convolutional (PAC) Codes: Sequential Decoding vs List Decoding". IEEE Transactions on Vehicular Technology. 70 (2): 1434–1447. doi:10.1109/TVT.2021.3052550.
  10. ^ Moradi, M. (2021). "On Sequential Decoding Metric Function of Polarization-Adjusted Convolutional (PAC) Codes". IEEE Transactions on Communications. 69 (12): 7913–7922. doi:10.1109/TCOMM.2021.3111018.
  11. ^ Moradi, M. (2024). "Polarization-adjusted convolutional (PAC) codes as a concatenation of inner cyclic and outer polar- and Reed-Muller-like codes". Finite Fields and Their Applications. 93: 102321. doi:10.1016/j.ffa.2023.102321.
  12. ^ Moradi, Mohsen; Mozammel, Amir; Qin, Kangjian; Arikan, Erdal (2020). "Performance and Complexity of Sequential Decoding of PAC Codes". arXiv:2012.04990 [cs.IT].
  13. ^ Yao, Hanwen; Fazeli, Arman; Vardy, Alexander (2021). "List Decoding of Arıkan's PAC Codes". Entropy. 23 (7): 841. arXiv:2005.13711. Bibcode:2021Entrp..23..841Y. doi:10.3390/e23070841. PMC 8303677. PMID 34209050.
  14. ^ Aharoni, Ziv; Huleihel, Bashar; Pfister, Henry D.; Permuter, Haim H. (2023-09-06). "Data-Driven Neural Polar Codes for Unknown Channels With and Without Memory". arXiv:2309.03148 [cs.IT].
  15. ^ Wang, Runxin; Honda, Junya; Yamamoto, Hirosuke; Liu, Rongke; Hou, Yi (2015). "Construction of polar codes for channels with memory". 2015 IEEE Information Theory Workshop - Fall (ITW). IEEE. pp. 187–191. doi:10.1109/ITWF.2015.7360760. ISBN 978-1-4673-7852-9.
  16. ^ Tal, Ido; Vardy, Alexander (2015). "List Decoding of Polar Codes". IEEE Transactions on Information Theory. 61 (5): 2213–2226. arXiv:1206.0050. doi:10.1109/TIT.2015.2410251. ISSN 0018-9448.
  17. ^ Honda, Junya; Yamamoto, Hirosuke (2013). "Polar Coding Without Alphabet Extension for Asymmetric Models". IEEE Transactions on Information Theory. 59 (12): 7829–7838. doi:10.1109/TIT.2013.2282305. ISSN 0018-9448.