Implementation and performance analysis of convolution error correcting codes with code rate=1/2.

2016 International Conference on Micro-Electronics and Telecommunication Engineering

Neha, Faculty of Engineering and Technology, Jamia Hamdard, New Delhi, India. e-mail: nehanit2807@gmail.com

Abstract- Channel coding plays a vital role in maintaining the quality and reliability of data delivery in a communication system. Practical applications of channel coding include deep space communication, satellite communication, wireless communication and data transmission. Convolution coding is one large class among these techniques. The main aim of this paper is to elucidate the main features of convolution coding and to explain why this coding scheme works so well in long distance communication, where the accuracy of the data is degraded by a surge in noise level. Various features and performance criteria of the encoder and decoder are discussed, giving pertinent ideas for designing a convolution code. The code is implemented and analysed in an AWGN channel at code rate = 1/2 while varying the constraint length and the number of memory registers.

Keywords- Channel coding, Convolution encoder, Viterbi decoder.

I. Introduction

Codes are used for data compression, cryptography and error correction. Codes are first categorized in two forms: a. Source codes, which take data from an information source and make it smaller (compressed form). b. Channel codes: channel coding refers to the class of signal transformations that improve the performance of a communication system. The goal of channel coding is to find codes that transmit quickly, contain many valid code words and can correct, or at least detect, many errors. Channel coding is a very valuable tool in the design of reliable communication systems: it provides improved error performance by adding redundant information to the input data being transmitted through a channel. Channel coding has two major forms: block coding and convolutional coding [1,9,12]. The overall classification of codes into 1. data compression (or source coding) and 2. error correction (or channel coding) is shown in Figure 1. A functional block diagram illustrating the signal flow and the signal-processing steps through a typical communication system is shown in Figure 2 [1].

II. Convolution Codes

A convolutional encoder accepts a sequence of message symbols and produces a sequence of code symbols. Its computations depend on the current set of input symbols as well as on the previous input symbols [2]. A block diagram derived from Figure 2 is shown in Figure 3.

a. Convolution codes

A convolutional encoder is formed by a fixed number of shift registers. Every input bit enters a shift register, and the output of the encoder is obtained by modulo-2 summing the bits in the shift registers. The number of output bits depends on the number of modulo-2 adders connected to the shift registers. A convolution code is described by the integers n, k, K and m, where:
n = number of output bits (commonly ranges from 1 to 8)
k = number of input bits (commonly ranges from 1 to 8)
K = constraint length (commonly ranges from 3 to 9)
m = number of memory registers (commonly ranges from 2 to 8)
The quantity k/n, known as the code rate, is a measure of the bandwidth efficiency of the code.
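As an illustration of the shift register structure just described, the following is a minimal MATLAB sketch of a rate 1/2, K = 3 (m = 2) encoder. The generator taps g1 = [1 1 1] and g2 = [1 0 1] (7 and 5 in octal) are an assumption made here for illustration, since they are the usual choice for K = 3 and the encoder connections of Figure 4 are not spelled out in the text.

% Minimal rate-1/2 convolutional encoder sketch (K = 3, m = 2).
% The taps g1 = [1 1 1] (7 octal) and g2 = [1 0 1] (5 octal) are an assumed example.
msg = [1 0 1 1 0 0 1];            % example input bits
g1  = [1 1 1];                    % connections to the first modulo-2 adder
g2  = [1 0 1];                    % connections to the second modulo-2 adder
reg = zeros(1, 2);                % m = 2 memory registers, initially cleared
coded = zeros(1, 2*numel(msg));   % n = 2 output bits per input bit (rate 1/2)
for i = 1:numel(msg)
    window = [msg(i) reg];                     % current input bit plus register contents
    coded(2*i-1) = mod(sum(window .* g1), 2);  % first output bit
    coded(2*i)   = mod(sum(window .* g2), 2);  % second output bit
    reg = window(1:2);                         % shift: newest bit enters the registers
end
disp(coded)                       % interleaved output pairs, two per input bit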
Code rates typically range from 1/8 to 7/8, except for deep space applications, where code rates as low as 1/100, or even lower, can be employed [3]. The structure of an (n = 2, k = 1, K = 3) convolution encoder is shown below in Figure 4. It is easy to construct a convolutional encoder: we first draw m boxes representing the m memory registers, then we draw n modulo-2 adders (here n = 2) representing the n output bits, and finally we connect the memory registers to the adders using the bits of the generator polynomials. The code rate of this encoder is 1/2. The (2, 1, 3) convolutional encoder has four states: 00, 01, 10 and 11. The encoder operation can be illustrated by any of three means: the state diagram representation, the trellis representation or the tree diagram representation.

b. Viterbi decoder

The Viterbi decoding algorithm was introduced and analysed by Viterbi in 1967, and Viterbi decoding was shown by Heller [4,5], Forney [6] and Omura [7] to be the most effective decoding technique for short constraint length codes. The great advantage of the Viterbi maximum likelihood decoder is that the number of decoder operations performed in decoding N bits is only N*2^(K-1), which is linear in N. The basic Viterbi decoder consists of a branch metric calculation (input) section, an ACS (add-compare-select) arithmetic section, and a path memory and output section; information can be thought of as passing successively from one section to the next. The branch metric calculation section accepts the input data and calculates (or looks up) the metric for each distinct branch. For a rate 1/2 code, four branches are possible, corresponding to transmission of 00, 01, 10 and 11. This is the only section of the decoder that is directly concerned with the number of bits of quantization of the received data, and hence the only section whose complexity depends directly on the quantization. The ACS section performs the basic arithmetic of the decoder: for a rate 1/n decoder, an ACS unit adds the state metrics of two states to the appropriate branch metrics, compares the resulting two sums and selects the larger. The decision is transmitted to the path memory section, and the larger of the two sums becomes the new state metric. One ACS operation must be performed for each of the 2^(K-1) states. The path memory section must store about a four-constraint-length history of decisions for each state; the memory requirements are thus nontrivial [8,9,10].
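To make the add-compare-select step concrete, the lines below are a small MATLAB sketch of one ACS update for a single trellis state. The variable names and example numbers are purely illustrative and not taken from the paper; the metric counts agreements with the received pair, so that the larger sum wins, matching the description above.

% One add-compare-select (ACS) update for a single state (illustrative values).
metric_pred = [12 9];        % state metrics of the two predecessor states
branch_bits = [0 0; 1 1];    % code bits expected on each of the two incoming branches
rx_pair     = [1 1];         % hard-decision received pair for this trellis step
branch_metric = [sum(branch_bits(1,:) == rx_pair), ...
                 sum(branch_bits(2,:) == rx_pair)];   % agreements per branch (0 to 2)
sums = metric_pred + branch_metric;        % add state metrics to branch metrics
[new_metric, winner] = max(sums);          % compare the two sums and select the larger
decision = winner - 1;                     % survivor decision passed to the path memory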

Figure 1. Classification of codes.
Figure 2. Block diagram of a communication system with source and channel encoding.
Figure 3. Encode/decode and modulate/demodulate portions of a communication link.
Figure 4. A (2,1,3) convolution encoder.

The Viterbi decoder is further categorized into two types: hard decision and soft decision. This paper uses the hard decision method, which is inexpensive and requires less memory than soft decision decoding. The error correcting capability of the code is expressed in terms of the free distance df: the maximum number of guaranteed correctable errors per code word is t = ⌊(df - 1)/2⌋, where the free distance df is defined as the minimum distance in the set of all arbitrarily long paths that diverge and remerge [1]. Tables 1 and 2 list the SNR (Eb/No) in dB achieved with the MATLAB implementation for the uncoded case and for the coded case with the standard and the tested polynomials, and the variation of coding gain with memory size is summarized in Figure 12.

III. Implementation and Results

The convolution code with the Viterbi algorithm (hard decision) is implemented and analysed in an AWGN channel with 10,000 bits as input, and the coding gain is calculated for each plot. Coding gain is defined as the reduction in Eb/No, measured in dB at a given BER, that can be realized through the use of the code. The connection vectors, or generator polynomials, of a convolutional code are usually selected based on the code's free distance properties. Figures 5 to 11 show the relation between BER and Eb/No for the tested and standard generator polynomials [10,11]; from these curves the SNR (in dB) as well as the coding gain have been calculated and listed in Table 1 and Table 2. It can be concluded that, for a constant BER, the coding gain increases as the memory size increases; its value also increases as the BER decreases for the same encoder size. For example, the coding gain at encoder size m = 2 is 0.1 dB for BER = 10^-2 and 1.6 dB for BER = 10^-4. On calculating the coding gain for code rate 1/2 while varying the constraint length, it is observed that the performance of the system improves in the coded curves. The coding gain corresponding to the various memory sizes and polynomials is tabulated below in Tables 3 and 4 respectively.
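A minimal MATLAB sketch of the kind of simulation described in this section is given below. It assumes the Communications Toolbox functions poly2trellis, convenc, vitdec and biterr, and it assumes the standard K = 3 generator polynomials [7 5] (octal) and a traceback depth of about five constraint lengths, since these details are not fixed by the text; it is a sketch of the procedure, not the paper's actual script.

% Hard-decision Viterbi BER simulation sketch: rate 1/2, K = 3, AWGN channel.
% Assumes Communications Toolbox; generators [7 5] (octal) are an illustrative choice.
numBits = 10000;                       % information bits per Eb/No point
trellis = poly2trellis(3, [7 5]);      % constraint length K = 3, rate 1/2
tblen   = 15;                          % traceback depth, about 5 constraint lengths
EbNo_dB = 0:8;
ber     = zeros(size(EbNo_dB));
for idx = 1:numel(EbNo_dB)
    msg   = randi([0 1], numBits, 1);            % random message bits
    coded = convenc(msg, trellis);               % convolutional encoding
    tx    = 1 - 2*coded;                         % BPSK mapping: 0 -> +1, 1 -> -1
    EsN0  = 10^(EbNo_dB(idx)/10) * 1/2;          % Es/N0 for rate-1/2 coded BPSK
    noise = sqrt(1/(2*EsN0)) * randn(size(tx));  % real AWGN with variance N0/2
    hard  = double((tx + noise) < 0);            % hard decisions back to bits
    dec   = vitdec(hard, trellis, tblen, 'trunc', 'hard');  % Viterbi decoding
    [~, ber(idx)] = biterr(msg, dec);            % bit error rate at this Eb/No
end
semilogy(EbNo_dB, ber); grid on;
xlabel('E_b/N_o (dB)'); ylabel('BER');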

Figure 5. Plot of BER vs. Eb/No for a rate 1/2, m = 2 encoder.
Figure 6. Plot of BER vs. Eb/No for a rate 1/2, m = 3 encoder.
Figure 7. Plot of BER vs. Eb/No for a rate 1/2, m = 4 encoder.
Figure 8. Plot of BER vs. Eb/No for a rate 1/2, m = 5 encoder.

Figure 9. Plot of BER vs. Eb/No for a rate 1/2, m = 6 encoder.
Figure 10. Plot of BER vs. Eb/No for a rate 1/2, m = 7 encoder.
Figure 11. Plot of BER vs. Eb/No for a rate 1/2, m = 8 encoder.
Figure 12. Plot of coding gain vs. memory size (m) for code rate 1/2.

Table 1. SNR (Eb/No) in dB for rate 1/2 with K = 3 to 9 (standard polynomial).

BER      Uncoded   K=3   K=4   K=5   K=6   K=7   K=8   K=9
10^-2    4.2       4.1   4.1   4.5   4.2   4.1   3.6   3.5
10^-3    6.7       5.7   5.8   5.5   5.2   4.9   4.5   4.4
10^-4    8.3       7.1   6.8   6.4   6.0   5.7   5.3   5.1

Table 2. SNR (Eb/No) in dB for the rate 1/2 convolution encoder, m = 2 to 8 (tested polynomial).

BER      Uncoded   m=2   m=3   m=4   m=5   m=6   m=7   m=8
10^-2    4.2       4.1   4.0   3.9   3.9   3.6   3.7   3.3
10^-3    6.7       5.6   5.4   5.1   5.0   4.8   4.7   4.6
10^-4    8.3       6.7   6.9   -     5.7   5.9   7.0   -
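As a worked check on how the coding gain tables that follow relate to the SNR tables above, the coding gain at a given BER is simply the uncoded Eb/No minus the coded Eb/No at that BER. The short MATLAB lines below use the m = 2 column of Table 2 (tested polynomial) and reproduce the corresponding column of Table 3.

% Coding gain (dB) = uncoded Eb/No - coded Eb/No at the same BER.
% Values taken from the m = 2 column of Table 2 (tested polynomial).
EbNo_uncoded = [4.2 6.7 8.3];             % dB at BER = 1e-2, 1e-3, 1e-4
EbNo_coded   = [4.1 5.6 6.7];             % dB at the same BER points
coding_gain  = EbNo_uncoded - EbNo_coded  % prints [0.1 1.1 1.6] dB, as in Table 3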

Table 3. Coding gain in dB for the rate 1/2 convolution encoder (tested polynomial).

BER      Uncoded Eb/No   m=2   m=3   m=4   m=5   m=6   m=7   m=8
10^-2    4.2             0.1   0.2   0.3   0.3   0.6   0.5   0.9
10^-3    6.7             1.1   1.3   1.6   1.7   1.9   2.0   2.1
10^-4    8.3             1.6   1.4   -     2.3   2.4   1.3   -

Table 4. Coding gain in dB for the rate 1/2 convolution encoder (standard polynomial).

BER      Uncoded Eb/No   K=3   K=4   K=5   K=6   K=7   K=8   K=9
10^-2    4.2             0.1   0.1   0.1   0.1   0.1   0.6   0.7
10^-3    6.7             1.0   0.9   0.9   1.5   1.8   2.2   2.3
10^-4    8.3             1.2   1.5   1.9   2.3   2.6   3.0   3.2

IV. Conclusion

An error correction technique particularly suited to the additive white Gaussian noise (AWGN) channel has been implemented using convolutional encoding with Viterbi decoding. The basic design and implementation of the convolution encoder and Viterbi decoder, with their essential blocks, has been discussed here. It is seen from the figures and tables that the performance of the convolution code improves with an encoder having a greater number of memory registers, i.e. a larger value of m or, equivalently, a larger constraint length K, for the same code rate. It can be concluded that, for a constant BER, the coding gain increases with increasing memory size; its value also increases as the BER decreases for the same encoder size. Viterbi has derived standard bounds on the bit error probability; these bounds are particularly tight for the AWGN channel at error probabilities below about 10^-4. This bound has been numerically evaluated over a range of Eb/No, and the upper bound supplies performance data at extremely low bit error rates, where simulation results are not available owing to the excessive computer time required. Finally, the results presented here provide the information necessary to evaluate the applicability of convolution coding and Viterbi decoding to space, satellite or long distance communication systems with a wide range of requirements and constraints.

REFERENCES
[1] Sklar, Bernard. Digital Communications, Vol. 2. NJ: Prentice Hall, 2001.
[2] Kumawat, Himmat Lal, and Sandhya Sharma. "An Implementation of a Forward Error Correction Technique using Convolution Encoding with Viterbi Decoding."
[3] Jadhao, Vishal G., and Prafulla D. Gawande. "Performance Analysis of Linear Block Code, Convolution Code and Concatenated Code to Study Their Comparative Effectiveness."
[4] Heller, J. A. "Short Constraint Length Convolutional Codes." Jet Propulsion Laboratory, Space Programs Summary 37-54 (1968): 171-177.
[5] Heller, J. A. "Improved Performance of Short Constraint Length Convolutional Codes." Jet Propulsion Laboratory, Space Programs Summary 37-56.
[6] Heller, Jerold, and Irwin Jacobs. "Viterbi Decoding for Satellite and Space Communication." IEEE Transactions on Communication Technology 19.5 (1971): 835-848.
[7] Viterbi, Andrew J., and Jim Omura. Principles of Digital Communication and Coding. Courier Corporation, 2013.
[8] Heller, Jerold, and Irwin Jacobs. "Viterbi Decoding for Satellite and Space Communication." IEEE Transactions on Communication Technology 19.5 (1971): 835-848.
[9] Lin, Shu, and Daniel J. Costello. Error Control Coding. Pearson Education India, 2004.
[10] Jacobs, I. "Practical Applications of Coding." IEEE Transactions on Information Theory 20.3 (1974): 305-310.
[11] Sweeney, Peter. Error Control Coding. Prentice Hall, 1991.
[12] Peterson, William Wesley, and Edward J. Weldon. Error-Correcting Codes. MIT Press, 1972.