Ethan Liang

Photo of Ethan 

PhD Student,
Electrical Engineering,
Stanford University
Email: emliang [at] stanford [dot] edu

About me

I received a B.S. degree in electrical engineering (summa cum laude) from the University of California, Los Angeles (UCLA). At UCLA, I worked with Prof. Richard Wesel on various channel coding problems.

Research

My research interests include channel coding, list decoding, and machine learning for wireless communication.

Recent Publications

Coming eventually :)

EE 359 Project: Improved Decoding of Convolutional and Turbo Codes via List Decoding and Deep Learning

Convolutional codes and turbo codes have played an important role in error-control coding in many domains, including deep-space communication and the LTE cellular standard. Although they were not selected for the Enhanced Mobile Broadband (eMBB) standard of 5G cellular, these codes may be strong candidates for the Massive Machine-Type Communication (mMTC) or Ultra-Reliable and Low-Latency Communication (URLLC) 5G NR standards. The mMTC and URLLC standards differ significantly from eMBB, prioritizing low latency, high reliability, energy efficiency, and cost over bandwidth. Meeting these stringent requirements calls for novel low-complexity, low-latency decoders that achieve low frame error rates over wireless channels. Convolutional codes with list decoding approach the finite-blocklength coding bounds [1] at very short blocklengths, whereas turbo codes exhibit near-Shannon-capacity performance on the AWGN channel at longer blocklengths. With further improvements, both families may become attractive choices for the mMTC and URLLC standards.
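For intuition, the finite-blocklength benchmark of [1] is commonly evaluated through its normal approximation. The sketch below is my own illustration, not part of the project code, and it uses a binary symmetric channel (BSC) rather than the AWGN channel for simplicity; the function name and parameter values are illustrative choices.

```python
from math import log2, sqrt
from statistics import NormalDist

def h2(p):
    """Binary entropy function in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def normal_approx_rate(n, p, eps):
    """Normal approximation [1] to the maximal achievable rate of a BSC(p)
    at blocklength n and target block error probability eps:
        R ~ C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n)
    """
    C = 1 - h2(p)                              # BSC capacity
    V = p * (1 - p) * log2((1 - p) / p) ** 2   # channel dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)      # inverse Q-function
    return C - sqrt(V / n) * q_inv + log2(n) / (2 * n)
```

The sqrt(V/n) backoff term shows why very short blocklengths pay the largest rate penalty relative to capacity, which is exactly the regime targeted by list-decoded convolutional codes.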

To this end, my EE 359 project has two main goals. First, it aims to build on the work in [2], [3] by improving existing list decoding algorithms for convolutional codes and by exploring possible extensions of list decoding to turbo codes. Second, researchers [4] have shown that an RNN trained on the noisy codewords of a convolutional code can decode with performance very close to that of the Viterbi algorithm, and that an RNN trained on the likelihood values produced by the BCJR algorithm for a turbo code can achieve near-optimal performance on the AWGN channel. If time permits, I will train RNN decoders to match the performance of the Viterbi and BCJR algorithms on the AWGN channel.
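To make the list decoding idea concrete, here is a minimal, self-contained sketch (my own toy example, not the project codebase) of a parallel list Viterbi decoder for a hard-decision channel, using the classic (5, 7) rate-1/2 feedforward code. It keeps the L best survivors per trellis state and returns a ranked list of candidate messages; in a CRC-aided scheme like [2], a CRC would then select the first candidate that passes its check.

```python
G, K = (0b101, 0b111), 3          # toy (5,7) code: g0 = 1 + D^2, g1 = 1 + D + D^2
N_STATES = 1 << (K - 1)

def step(state, b):
    """Next state and output bits for input b (state = last K-1 input bits)."""
    reg = (b << (K - 1)) | state
    return reg >> 1, [bin(reg & g).count("1") % 2 for g in G]

def encode(bits):
    """Zero-terminated encoding: append K-1 tail zeros to return to state 0."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        state, o = step(state, b)
        out += o
    return out

def list_viterbi(received, L=4):
    """Parallel list Viterbi: keep the L best (metric, path) pairs per state
    and return up to L candidate messages ranked by Hamming distance."""
    n = len(G)
    steps = len(received) // n
    survivors = [[] for _ in range(N_STATES)]
    survivors[0] = [(0, [])]                       # start in the all-zero state
    for t in range(steps):
        r = received[t * n:(t + 1) * n]
        nxt = [[] for _ in range(N_STATES)]
        for s in range(N_STATES):
            for metric, path in survivors[s]:
                for b in (0, 1):
                    ns, o = step(s, b)
                    dist = sum(x != y for x, y in zip(o, r))
                    nxt[ns].append((metric + dist, path + [b]))
        survivors = [sorted(c)[:L] for c in nxt]   # prune to the L best per state
    # candidates must terminate in state 0; strip the K-1 tail bits
    return [path[:steps - (K - 1)] for _, path in survivors[0]]
```

With L = 1 this reduces to the standard Viterbi algorithm; the list only pays off when the best path fails an outer check (e.g. a CRC) and a lower-ranked candidate is correct.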

Here is the initial codebase. So far, I have implemented a rate-1/n convolutional encoder for systematic feedback, non-systematic feedforward, and tail-biting convolutional codes. I have also implemented a standard zero-terminated Viterbi decoder for systematic and non-systematic convolutional codes, as well as a tail-biting convolutional code (TBCC) decoder using the wrap-around Viterbi algorithm (WAVA). The next steps are to implement the BCJR algorithm and a generalized list decoding algorithm for both convolutional and turbo codes. After this, I will proceed to train RNNs on training data generated from my code above.
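The BCJR step above computes per-bit posteriors rather than a single best path. As a rough, self-contained sketch of the idea (again my own toy example on a BSC with an assumed crossover probability, not the project code or its AWGN setting), here is a probability-domain forward-backward pass for the same toy (5, 7) code:

```python
G, K = (0b101, 0b111), 3          # toy (5,7) code: g0 = 1 + D^2, g1 = 1 + D + D^2
N_STATES = 1 << (K - 1)
P_FLIP = 0.05                     # assumed BSC crossover probability

def step(state, b):
    """Next state and output bits for input b (state = last K-1 input bits)."""
    reg = (b << (K - 1)) | state
    return reg >> 1, [bin(reg & g).count("1") % 2 for g in G]

def encode(bits):
    """Zero-terminated encoding: append K-1 tail zeros to return to state 0."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        state, o = step(state, b)
        out += o
    return out

def bcjr(received):
    """Probability-domain BCJR: hard decisions from per-bit posteriors,
    assuming a zero-terminated trellis, uniform inputs, and a BSC."""
    n = len(G)
    steps = len(received) // n

    def gamma(t, s, b):            # branch metric P(r_t | transition)
        _, o = step(s, b)
        r = received[t * n:(t + 1) * n]
        d = sum(x != y for x, y in zip(o, r))
        return (P_FLIP ** d) * ((1 - P_FLIP) ** (n - d))

    # forward recursion (alpha), normalized each step for numerical stability
    alpha = [[0.0] * N_STATES for _ in range(steps + 1)]
    alpha[0][0] = 1.0
    for t in range(steps):
        for s in range(N_STATES):
            if alpha[t][s] == 0.0:
                continue
            for b in (0, 1):
                ns, _ = step(s, b)
                alpha[t + 1][ns] += alpha[t][s] * gamma(t, s, b)
        tot = sum(alpha[t + 1]) or 1.0
        alpha[t + 1] = [a / tot for a in alpha[t + 1]]

    # backward recursion (beta), anchored at the terminating zero state
    beta = [[0.0] * N_STATES for _ in range(steps + 1)]
    beta[steps][0] = 1.0
    for t in range(steps - 1, -1, -1):
        for s in range(N_STATES):
            for b in (0, 1):
                ns, _ = step(s, b)
                beta[t][s] += beta[t + 1][ns] * gamma(t, s, b)
        tot = sum(beta[t]) or 1.0
        beta[t] = [x / tot for x in beta[t]]

    # posterior of each message bit (tail bits skipped)
    decisions = []
    for t in range(steps - (K - 1)):
        p = [0.0, 0.0]
        for s in range(N_STATES):
            for b in (0, 1):
                ns, _ = step(s, b)
                p[b] += alpha[t][s] * gamma(t, s, b) * beta[t + 1][ns]
        decisions.append(int(p[1] > p[0]))
    return decisions
```

In a turbo decoder the normalized posteriors would be exchanged between two such components as extrinsic information; this sketch only shows the single-code forward-backward pass.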

References:
[1] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel Coding Rate in the Finite Blocklength Regime,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.
[2] E. Liang, H. Yang, D. Divsalar, and R. D. Wesel, “List-Decoded Tail-Biting Convolutional Codes with Distance-Spectrum Optimal CRCs for 5G,” accepted to 2019 IEEE Global Communications Conference (GLOBECOM), Dec. 9-13, 2019, Big Island, HI, USA.
[3] H. Yang, E. Liang, H. Yao, A. Vardy, D. Divsalar, and R. D. Wesel, “A List Decoding Approach to Low-Complexity Soft Maximum-Likelihood Decoding of Cyclic Codes,” accepted to 2019 IEEE Global Communications Conference (GLOBECOM), Dec. 9-13, 2019, Big Island, HI, USA.
[4] H. Kim, Y. Jiang, R. Rana, S. Kannan, S. Oh, and P. Viswanath, “Communication Algorithms via Deep Learning,” 6th International Conference on Learning Representations (ICLR), Vancouver, Canada, Apr. 2018.

Here is the final report.

Here is the final codebase.