DIGITAL SIGNAL PROCESSING FOR OPTICAL OFDM BASED FUTURE OPTICAL NETWORKS


UNIT-1

INTRODUCTION

1.1 GENERAL

Digital signal processing (DSP) is widely exploited in the modern world to enable a vast array of high-performance services and devices that were unimaginable several years ago. As a highly pervasive technology, DSP considerably enhances everyday life by enabling applications ranging from antilock braking systems to satellite navigation and sophisticated medical imaging. DSP has also been an enabler for many of the highly successful communications technologies of the last 20 years. It is only in recent years that advanced DSP has been utilized in optical communications to realize commercial long-haul optical systems in the form of DSP-enabled coherent optical receivers, which not only offer high transmission capacities of the order of 100 Gb/s per wavelength, but also achieve ultra-sensitive reception for radically increasing unrepeatered transmission distances. To enable the wide use of DSP in other areas of optical communications, there is growing interest in the exploitation of DSP to solve the challenges facing future optical access networks.

The passive optical network (PON) has been widely adopted as one of the main fiber-to-the-home (FTTH) solutions capable of meeting the low-cost demands of the access networks. PON technologies are expected to deliver an aggregate capacity of 40 Gb/s in the near future, and the NG-PON2 standardization work has addressed this by the decision to adopt a time-division multiplexing/wavelength-division multiplexing (TDM/WDM) approach; this maintains the use of conventional on-off keying (OOK) modulation with transmission speeds preserved at 10 Gb/s per wavelength. It is widely accepted that, in the long term, future-generation PON technologies must exceed the 10 Gb/s per wavelength threshold to further increase network capacity throughput. It is technically highly challenging to achieve this with conventional binary OOK modulation. DSP-enabled PON technologies, on the other hand, offer far greater flexibility in signal generation and decoding, allowing compensation of signal distortions and/or utilization of advanced modulation formats which are inherently more tolerant to fiber distortion effects. DSP can also allow the use of spectrally efficient modulation techniques, which means increased network capacity can be achieved through efficient exploitation of component bandwidths; thus great commercial benefits may be attained if the established, mature optical component sources for NG-PON1 and NG-PON2 can be exploited. DSP can also enable adaptive modulation techniques which can adapt to the varying spectral characteristics of the network due to the natural variations in optical fiber, optical component, and radio frequency (RF) component characteristics. DSP can also enable important features such as dynamic bandwidth allocation (DBA) to improve capacity utilization efficiency. DSP-enabled optical access networks can thus potentially provide network administrators with on-demand adaptability down to the physical layer, making the networks highly adaptable to fluctuating end-user service demands.

Fig. 1. System elements of DSP-based optical transceivers.

1.2 Digital signal processors in cellular radio communications

Contemporary wireless communications are based on digital communications technologies. The

recent commercial success of mobile cellular communications has been enabled in part by

successful designs of digital signal processors with appropriate on-chip memories and

specialized accelerators for digital transceiver operations. This article provides an overview of

fixed point digital signal processors and ways in which they are used in cellular communications.

Directions for future wireless-focused DSP technology developments are discussed.


1.3 DSP-based architectures for mobile communications: Past, present, and

future

Programmable DSPs are pervasive in the wireless handset market for digital cellular telephony.

We present the argument that DSPs will continue to play a dominant, and in fact increasing, role

in wireless communications devices by looking at the history of DSP use in digital telephony,

examining the DSP-based solution options for today's standards, and looking at future trends in

low-power DSPs.

1.4 Trends in optical access and in-building networks

As users require ever more speed, variety and personalization in ICT services, the capacity and versatility of access networks need to be expanded. The first generation of point-to-point and of point-to-multipoint time-multiplexed passive optical networks (PON) is being installed. More powerful wavelength-multiplexed and flexible hybrid wavelength-time multiplexed solutions are coming up. Radio-over-fibre techniques create pico-cells for high-bandwidth wireless services. Next to bringing this bandwidth luxury to the doorstep, it must be distributed inside the user's home. By advanced signal processing techniques, high-capacity wired and wireless services are jointly distributed in a low-cost converged in-building network using multimode (plastic) optical fibre.

1.5 Time- and wavelength-division multiplexed passive optical network (TWDM-PON) for next-generation PON stage 2 (NG-PON2)

The next-generation passive optical network stage 2 (NG-PON2) effort was initiated by the full service access network (FSAN) group in 2011 to investigate upcoming technologies enabling a bandwidth increase beyond 10 Gb/s in the optical access network. The FSAN meeting in April 2012 selected the time- and wavelength-division multiplexed passive optical network (TWDM-PON) as a primary solution for NG-PON2. In this paper, we summarize the TWDM-PON research in FSAN by reviewing the basics of TWDM-PON and presenting the world's first full-system 40 Gb/s TWDM-PON prototype. After introducing the TWDM-PON architecture, we explore TWDM-PON wavelength plan options to meet the NG-PON2 requirements. TWDM-PON key technologies and their respective levels of development are further discussed to assess feasibility and availability. The first full-system 40 Gb/s TWDM-PON prototype is demonstrated to provide 40 Gb/s downstream and 10 Gb/s upstream bandwidth. This full prototype system offers a 38 dB power budget and supports a 20 km reach with a 1:512 split ratio. It coexists with commercially deployed Gigabit PON (G-PON) and 10 Gigabit PON (XG-PON) systems. The operator-vendor joint test results confirm that TWDM-PON is achievable by the reuse and integration of commercial devices and components.

1.6 Precise characterization of the frequency chirp in directly modulated DFB

lasers

We report results from the characterization of the frequency chirp of distributed feedback (DFB) lasers under direct modulation. Parameters describing transient and adiabatic chirp effects are measured for a DFB laser from the ratio of phase to amplitude modulation factors when modulated with sine waves, using a high-resolution optical spectrum analyzer. Transient and adiabatic chirp effects produced under digital non-return-to-zero (NRZ) amplitude modulation are also analyzed using the emitted optical spectrum. Finally, results from the measurement technique are compared with those obtained from the measured optical spectra.


UNIT-2

BASIC CONCEPTS

STATE OF THE ART

2.1 Future optical access networks

An increase in demand for high data rates has been an important factor in the emergence of

OFDM in the optical domain, with a wide variety of solutions developed for different

applications both in the core and access networks. This emergence has been facilitated by the

intrinsic advantages of OFDM such as its high spectral efficiency [21], ease of channel and phase

estimation [13] and robustness against delay. In this section, an overview of optical access

networks is presented, covering state-of-the-art technologies, recent progress and different

application scenarios. OFDM is also presented as an effective solution to the major problems of

today’s optical access networks. The structure of this chapter is as follows: section 2.2 provides

an overview of next-generation broadband access networks. In this section, we highlight optical

fiber as probably the most viable means of meeting the ever-increasing bandwidth demand of

subscribers. The various state-of-the-art optical technologies currently being deployed for shared

fiber multiple access such as time division multiple access (TDMA), wavelength division

multiple access (WDMA), and orthogonal frequency division multiple access (OFDMA) are

explained.

Section 2.3 provides a review of some fundamental OFDM principles including the background,

basic mathematical representation, system implementations, cyclic prefix use, advantages and

disadvantages of OFDM. This literature review is essential in order to appreciate the motivation

behind applying OFDM techniques in optical communication systems. In section 2.4, some

aspects of optical modulation are presented. In section 2.5, the two optical OFDM variants that have been introduced, coherent optical OFDM (CO-OFDM) and direct-detection optical OFDM (DD-OFDM), are examined with a focus on their corresponding transmitter- and receiver-side

architectures. The respective advantages and disadvantages of these two variants are also

highlighted, with emphasis placed on implementation aspects that are of importance in optical

access networks.


2.2 Passive Optical Networks

PONs have high potential for high-capacity data transmission, and they can operate over a shared medium, which makes them economically very efficient. The basic structure of a PON is illustrated in Figure 2-1 [22]. A PON is composed of the Optical Line Terminal (OLT), the Optical Network Units (ONUs), and intermediate devices which distribute the signals and are located at the remote nodes. All in all, the whole network is composed of two main parts: the feeder part, from the OLT to the first remote node, and the Optical Distribution Network (ODN), from the first remote node to the ONUs.

Figure 2-1 Basic structure of PONs

In a PON, the transported traffic can be classified into two categories depending on its direction: the downstream channel and the upstream channel, which are illustrated in Figure 2-2. In the upstream direction, the optical system becomes a multipoint-to-point network between the different ONUs and the OLT, so the optical signals must be combined using a multiple access protocol [23]. Generally speaking, the upstream is more challenging than the downstream part of the network. In order for the individual ONUs to be able to send traffic upstream to the OLT without collisions, it is necessary to have an appropriate multiple access scheme. In this regard, several multiple access techniques have been developed for PON operation. These include TDMA, WDMA and OFDMA [24].


Figure 2-2 Downstream and upstream channel scheme of a PON

To satisfy the requirements of future PON systems, several multiple-access candidate

technologies have been proposed, including time division multiple access (TDMA)-PON,

wavelength division multiplexed (WDM)-PON, OFDMA-PON [24], as well as various hybrid

options, formed from one or more of the aforementioned constituent technologies [25] [26] [27].

While OFDMA-based PON is entirely amenable to hybrid operation with both WDM and TDMA overlays, its distinguishing feature is a pronounced reliance on electronic digital signal processing (DSP) to tackle the key performance and cost challenges. OFDMA-PON thus

essentially extends the trend of “software-defined” (DSP-based) optical communications to next-

generation optical access [28]. The resulting volume-driven cost profile is indeed the target

regime for any technology candidate in this space.

To satisfy the requirements of future PON systems, both upstream and downstream traffic

require high-level multiplexing techniques. In the following subsection, a brief description and a

comparative study of the most relevant multiple access candidate protocols are presented.

2.2.1 TDMA

In TDMA-PONs, only one ONU can transmit or receive at a given time instant. Since the ONUs are typically at different distances from the OLT, ranging protocols are used to ensure that each ONU sends its data at the right time instant. These ranging protocols measure the round-trip time (RTT) from each ONU to the OLT and then offset each RTT to the highest RTT. For TDMA-PONs, a burst-mode receiver which can handle different amplitude levels of packets is also needed at the OLT [22]. Initially with TDMA-PONs, the bandwidth of each ONU was assigned during ranging. This implies that the capacity of each ONU would decrease with an increase in the number of ONUs. However, TDMA-PONs can now dynamically adjust the bandwidth of each ONU depending on customer need. Several TDMA-PONs have been standardised. These include broadband PON (BPON) defined by the ITU-T G.983 standard, the gigabit PON (GPON) defined by the ITU-T G.984 standard, and the Ethernet PON (EPON) defined by the IEEE 802.3ah standard.
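As a minimal illustration of the ranging step described above, the offset applied at each ONU is simply the difference between its own round-trip time and the largest one measured; the sketch below uses made-up RTT values and is not taken from any PON standard:

# Minimal sketch of TDMA-PON ranging: offset every ONU to the largest RTT.
# The RTT values are hypothetical examples (in microseconds).
rtt_us = {"ONU1": 98.0, "ONU2": 150.0, "ONU3": 121.5}      # measured round-trip times
max_rtt = max(rtt_us.values())
# Each ONU delays its upstream burst by (max_rtt - own RTT) so that all
# bursts appear at the OLT as if they had travelled the longest path.
equalization_delay_us = {onu: max_rtt - rtt for onu, rtt in rtt_us.items()}
print(equalization_delay_us)    # {'ONU1': 52.0, 'ONU2': 0.0, 'ONU3': 28.5}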

2.2.2 WDMA

Typically in WDMA-PONs, each ONU uses a dedicated wavelength to transmit data to the OLT,

implying there is no need for time synchronization. This multiple wavelength arrangement

requires multiple transceivers; hence AWGs or optical filters are needed to correctly distribute

the wavelengths. Moreover, having each ONU operating at a dedicated wavelength might be

impractical because of the cost and complexity involved for network operators in managing the

inventory of lasers.

2.2.3 OFDMA

OFDMA-PON which is shown in Figure 2-3 employs OFDM as the modulation scheme and

exploits its superior transmission capability to improve the bandwidth provisioning of optical

access networks [29].

Figure 2-3 OFDMA-PON architecture


For both downstream and upstream traffic, the OFDMA-PON architecture divides the total OFDM bandwidth into N sub-bands [1], each containing the number of subcarriers required by each user. OFDM uses a large number of closely-spaced orthogonal subcarriers to carry data traffic. Each subcarrier is modulated by a conventional modulation scheme (such as quadrature amplitude modulation or phase-shift keying) at a low symbol rate, so that the sum of the rates provided by all subcarriers is comparable to that of conventional single-carrier modulation schemes occupying the same bandwidth [5]. OFDMA-PON can be combined with WDM to further increase the bandwidth provisioning [11].

OFDMA-PON exhibits the following advantages:

• Enhanced spectral efficiency: Orthogonality among subcarriers in OFDM allows spectral

overlap of individual sub-channels. In addition, OFDM uses a simple constellation mapping

algorithm for high-order modulation schemes such as 16QAM and 8PSK. Using these

techniques, OFDM in PON makes effective use of spectral resources and improves spectral

efficiency [29] [9].

• Avoiding costly optical devices and using cheaper electronic devices: Integrated optical devices

are very costly, and optical modules of 10G or higher can significantly drive up the cost of an

access network. OFDM avoids costly optical devices and uses cheaper electronic devices.

OFDM leverages the integration and low-cost advantages of high-speed digital signal

processors and high-frequency microwave devices to develop access networks [21] [29].

• Dynamic allocation of subcarriers: Depending on channel environments and application scenarios, OFDM can dynamically allocate the number of bits carried by each subcarrier, determine the modulation scheme used by each subcarrier, and adjust the transmitting power of each subcarrier by using a simple FFT-based algorithm. In OFDM-PON, allocation of each subcarrier is executed in real time according to the access distance, subscriber type, and access service [5] (a simple bit-loading sketch is given after this list).

• Smooth evolution to ultra-long-haul access networks: A simple network structure improves the performance of an access network and reduces costs. Converged optical core, metro, and access networks have become a hot research topic, and long-reach access networks have been proposed. Long-reach optical access suffers from the problem of high fiber chromatic dispersion. The OFDM modulation scheme can help address the chromatic and polarization-mode dispersion in optical links [29]. Therefore, OFDM-PON can be used to smoothly evolve optical access networks into ultra-long-haul access networks.
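To make the idea of per-subcarrier adaptation concrete, the sketch below assigns an integer number of bits to each subcarrier from an assumed SNR profile using a rough SNR-gap rule; the SNR values and the 6 dB gap are illustrative assumptions, not a standardized OFDM-PON algorithm:

import numpy as np

# Rough per-subcarrier bit-loading sketch: bits_k ~ floor(log2(1 + SNR_k/gap)).
snr_db = np.array([28, 26, 24, 21, 18, 15, 12, 9])     # assumed SNR per subcarrier (dB)
gap_db = 6.0                                           # assumed implementation/coding margin
snr_eff = 10 ** ((snr_db - gap_db) / 10)               # effective linear SNR after the gap
bits = np.floor(np.log2(1 + snr_eff)).astype(int)      # integer bits per subcarrier
bits = np.clip(bits, 0, 6)                             # cap at 64-QAM, drop unusable tones
print(bits, "-> bits per OFDM symbol:", bits.sum())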

2.3 Orthogonal Frequency Division Multiplexing (OFDM)

In optical systems, system designers have to deal with the inherent linear distortions that exist in the fiber link (mainly in the form of chromatic dispersion and PMD). Although optical fiber was historically thought to be a virtually inexhaustible resource, with transmission rates low enough to render linear distortion effects negligible [21], this is no longer the norm in the context of next-generation optical access. This is because, as stated in the introduction, there has been an explosion in subscriber demand for bandwidth-intensive applications that require multi-Gbit/s data rates to support them. As data rates increase, the impact of chromatic dispersion grows quadratically with the data rate while that of PMD grows linearly [30]. In addition, recent research has shown that the optical fiber channel itself imposes some fundamental capacity limits [31].

Considering all this, OFDM, a modulation format advantaged by its spectral efficiency, robustness against delay, and ease of channel and phase estimation, made the transition into the optical communications world, where it was applied to long-haul fiber transmission at high data rates of up to 100 Gbit/s over lengths of 1000 km [13] [11], and is now being used for optical access applications [21]. In this section, a review of general OFDM principles is provided to appreciate the motivation behind applying OFDM techniques in optical communication systems.

While OFDM theory is extensive, an intuitive understanding may be gained by contrasting

OFDM with single carrier (SC) transmission and conventional frequency division multiplexing

(FDM). As shown in Figure 2-4 [21], the same overall data rate can be achieved either by serial

SC transmission over a broad frequency spectrum, or by parallel transmission on multiple,

narrowband spectral tributaries, i.e., via FDM. (It is noted that if the FDM subcarrier frequencies

were replaced by wavelengths, a traditional WDM setup would be obtained.)


Figure 2-4 Frequency domain spectra for (a)SC, (b)FDM, (c)OFDM signals

However, at very high symbol rates, the SC approach mandates such short symbol times that, in

any non-ideal linear channel, symbols will inevitably become lengthened by the convolution

with the channel’s non-ideal impulse response. The resulting symbol spreading is referred to as

dispersion [21]. Dispersion extends data symbols beyond their designated slot and into adjacent

symbol times, producing inter-symbol interference (ISI) that must be equalized at the receiver.

ISI effects moreover worsen with shorter symbol times because a given symbol is spread over more and more

adjacent symbols, and increasingly complicated receiver-side equalizers (i.e., filters) with a high

number of taps (i.e., coefficients) are needed. The advantage of the “parallelized” FDM approach

is that the symbols on the narrowband tributaries, or subcarriers, have longer durations, making

them less vulnerable to linear distortion effects that increase with the symbol rate, such as

chromatic dispersion (CD). This principle is also related to time-frequency duality: i.e., the

narrower a signal is in frequency, the wider (i.e., longer) it is in time. Consequently, the channel

delay (e.g., wireless multipath delay spread, CD-induced delay, etc.) becomes a small fraction of

the symbol time, T. As a result, ISI will affect at most one symbol, such that the channel

response over each narrowband subcarrier can be approximated as having a constant amplitude

and phase. Data symbols can then be recovered via one-tap (i.e., single-coefficient) frequency-domain equalization (FDE). The

tradeoff for this benefit is a loss in spectral efficiency due to the insertion of non-data-carrying

spectral guard bands, ΔF, which are needed to separate the FDM subcarriers and prevent

interference that would otherwise arise from any frequency-domain subcarrier overlap.


The orthogonality condition among subcarriers, ΔF = 1/T, allows the recovery of the sent symbols in spite of the spectral overlap, thus recovering the spectral efficiency, so that the aggregate symbol rate with orthogonal FDM is equivalent to that of SC modulation while retaining the desirable qualities of FDM.

2.3.1 Single-carrier and multi-carrier modulation systems

There are two modulation techniques that are employed in modern communication systems.

These are single-carrier modulation and multi-carrier modulation [21]. In single-carrier

modulation, the information is modulated onto one carrier by varying the amplitude, frequency

or the phase of the carrier. For digital systems, this information is in the form of bits or symbols

(collections of bits). The signaling interval for a single-carrier modulation system equals the symbol duration, and the entire bandwidth is occupied by the modulated carrier. As data rates increase, the symbol duration Ts becomes smaller. If Ts is smaller than the channel delay spread τ, there will be significant ISI due to the memory of the dispersive channel [32], and an error floor quickly develops. Consequently, the system becomes more

susceptible to loss of information from adverse conditions such as frequency selective fading due

to multipath, interference from other sources, and impulse noise. On the other hand, in multi-

carrier modulation systems such as frequency division multiplexing (FDM) systems, the

modulated carrier occupies only a fraction of the total bandwidth. In such systems, the

transmitted information at a high data rate is divided into lower-rate parallel streams, each of

these streams simultaneously modulating a different subcarrier. If the total data rate is Rs, each

parallel stream would have a data rate equal to Rs/N. This implies that the symbol duration of each parallel stream is N×Ts, i.e., N times longer than the serial symbol duration, and much greater than the channel delay spread τ. These systems are thus tolerant to ISI and are increasingly being employed in modern communication systems. The amount of spectral saving of the OFDM scheme compared to the conventional FDM scheme is illustrated in Figure 2-5.

Figure 2-5 FDM vs. OFDM modulation formats


2.3.2 Mathematical Basics of OFDM

In OFDM systems, any signal s(t) can be represented as

where cki is the i-th information symbol on the k-th subcarrier, sk(t) is the waveform for the k-th subcarrier, Nsc is the number of subcarriers, and Ts is the symbol period. sk(t - iTs) is selected from a set of orthogonal functions in the sense that

where δkl (or δij) is the Kronecker delta function. One of the most popular choices of the function set is the windowed discrete tones given by

where fk is the frequency of the k-th subcarrier and П(t) is the pulse-shaping function. In such a scheme, OFDM becomes a special class of multicarrier modulation (MCM); a general implementation of it is illustrated in Figure 2-6. The optimum detector for each subcarrier could be a filter that matches the subcarrier waveform, or a correlator matched to the subcarrier, as shown in Figure 2-6.

Figure 2-6 Conceptual diagram of a general multi carrier modulation system
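The expressions referenced above take the usual multicarrier textbook form; a hedged LaTeX sketch, using the symbols already defined (cki, sk(t), Nsc, Ts, fk, П(t), r(t)), is:

\begin{align}
s(t) &= \sum_{i=-\infty}^{+\infty}\sum_{k=1}^{N_{sc}} c_{ki}\, s_k(t - iT_s) \\
s_k(t) &= \Pi(t)\, e^{j 2\pi f_k t}, \qquad
  \Pi(t) = \begin{cases} 1, & 0 < t \le T_s \\ 0, & \text{otherwise} \end{cases} \\
\delta_{kl} &= \frac{1}{T_s}\int_{0}^{T_s} s_k(t)\, s_l^{*}(t)\, dt, \qquad
  f_k - f_l = \frac{m}{T_s}, \ \ m \ \text{an integer} \\
c'_{ik} &= \frac{1}{T_s}\int_{0}^{T_s} r(t - iT_s)\, s_k^{*}(t)\, dt
\end{align}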


Therefore, the detected information symbol c'ik at the output of the correlator is given by

where r(t) is the received signal in the time domain. The major disadvantage of classical MCM is that it requires excessive bandwidth: in order to design the filters and oscillators cost-effectively, the channel spacing has to be a multiple of the symbol rate, greatly reducing the spectral efficiency. Using orthogonal subcarriers was first proposed in [33] to achieve high spectral-efficiency transmission. The orthogonality can be verified from the straightforward correlation between any two subcarriers, given by

It can be seen that if the following condition

is satisfied, then the two subcarriers are orthogonal to each other, i.e., <sk,sl> = 1 only for k = l, and <sk,sl> = 0 for k ≠ l. This signifies that these orthogonal subcarrier sets, with their frequencies spaced at multiples of the inverse of the symbol period, can be recovered with the matched filters without inter-carrier interference, in spite of the strong spectral overlapping of the signals. Four subcarriers with different frequencies are illustrated in Figure 2-7.


Figure 2-7 Four subcarriers with different frequencies
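This orthogonality can also be checked numerically; a minimal sketch (the symbol period, number of samples and subcarrier indices are arbitrary illustrative choices) is:

import numpy as np

# Discrete check that subcarriers spaced at multiples of 1/Ts are orthogonal
# over one symbol period, despite their spectra overlapping.
Ts, N = 1.0, 1024                                  # symbol period and samples per symbol
t = np.arange(N) * Ts / N

def subcarrier(k):                                 # k-th subcarrier waveform, fk = k/Ts
    return np.exp(1j * 2 * np.pi * (k / Ts) * t)

def corr(k, l):                                    # discrete correlation <s_k, s_l>
    return np.mean(subcarrier(k) * np.conj(subcarrier(l)))

print(abs(corr(3, 3)))                             # ~1.0 : same subcarrier
print(abs(corr(3, 5)))                             # ~0.0 : different subcarriers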

2.3.3 Discrete Fourier Transform Realization

A fundamental challenge with OFDM is that a large number of subcarriers are needed so that the transmission channel appears to each subcarrier as a flat channel, in order to recover the subcarriers with minimum signal processing complexity. This leads to an extremely complex architecture involving many oscillators and filters at both the transmit and receive ends. In [34], it was first revealed that OFDM modulation/demodulation can be implemented using the inverse discrete Fourier transform (IDFT)/discrete Fourier transform (DFT). Let us temporarily omit the index 'i' in (2-1) to focus our attention on one OFDM symbol, and assume that we sample s(t) at every interval of Ts/Nsc; the m-th sample of s(t) from expression (2-1) then becomes

Using the orthogonality condition of (2-9) and the convention that

and some substitutions, we have

where F stands for the Fourier transform and m = [1, Nsc]. In a similar fashion at the receiver end, we arrive at

where rm is the received signal sampled at every interval of Ts/Nsc. Therefore, the OFDM modulation process is equivalent to applying the IFFT algorithm to the symbols to be sent and then performing digital-to-analog conversion (DAC).
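This equivalence is easy to verify numerically; a minimal sketch (random QPSK symbols, and the 1/Nsc scaling convention used by numpy's ifft) is:

import numpy as np

# One OFDM symbol built as an explicit sum of complex exponentials equals the
# inverse DFT of the information symbols (up to numpy's 1/Nsc scaling).
Nsc = 8
c = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, Nsc))     # random QPSK symbols
k = np.arange(Nsc)
direct = np.array([np.sum(c * np.exp(1j * 2 * np.pi * k * m / Nsc))
                   for m in range(Nsc)]) / Nsc                # explicit summation
via_ifft = np.fft.ifft(c)                                     # IDFT implementation
print(np.allclose(direct, via_ifft))                          # True
print(np.allclose(np.fft.fft(via_ifft), c))                   # DFT at the receiver recovers c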

2.3.4 Complex and Real Representations of an OFDM Signal:

At the very beginning and end of the digital signal processing, the baseband OFDM signal is represented as a complex value, but during transmission the OFDM signal becomes a real-valued signal. More precisely, frequency up-conversion and frequency down-conversion are required for this complex-to-real conversion, or baseband-to-passband conversion. Mathematically, such a transformation involves a complex multiplier (mixer), or IQ modulator/demodulator, which for the up-conversion can be expressed as:

Figure 2-8 IQ modulator for up-conversion of a complex-valued baseband signal 'c' to a real-valued passband signal 'z'. The down-conversion follows the reverse process by reversing the flow of 'c' and 'z'.


where the passband signal z(t) is a real-valued signal centred at the carrier frequency f, c(t) is the complex-valued baseband signal, and 'Re' and 'Im' stand for the real and imaginary parts of a complex quantity. Traditionally, the IQ modulator can be constructed with a pair of RF mixers and local oscillators (LOs) with a 90-degree phase shift, as shown in Figure 2-8. The real-to-complex down-conversion of an OFDM signal follows the reverse process of the up-conversion by reversing the flow of the baseband signal 'c' and the RF passband signal 'z' in Figure 2-8. The IQ modulator/demodulator for optical OFDM up/down-conversion resembles, but is relatively more complicated than, the RF counterpart [35]. It is usually implemented using Mach-Zehnder modulators (MZMs) [36].
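A minimal baseband-to-passband sketch of this IQ up/down-conversion (the carrier frequency, sample rate, test tone and crude moving-average low-pass filter are all arbitrary illustrative choices) is:

import numpy as np

# Up-convert a complex baseband signal c(t) to a real passband signal z(t) with an
# ideal IQ mixer, then recover it by the reverse (down-conversion) process.
fs, fc = 1000.0, 100.0                        # sample rate and carrier (arbitrary units)
t = np.arange(0, 1.0, 1 / fs)
c = np.exp(1j * 2 * np.pi * 5 * t)            # complex baseband test signal
z = c.real * np.cos(2 * np.pi * fc * t) - c.imag * np.sin(2 * np.pi * fc * t)

lp = lambda x: np.convolve(x, np.ones(25) / 25, mode="same")   # crude low-pass filter
i = lp(z * np.cos(2 * np.pi * fc * t))        # mix down and filter: ~Re{c}/2
q = lp(-z * np.sin(2 * np.pi * fc * t))       # mix down and filter: ~Im{c}/2
c_hat = 2 * (i + 1j * q)
print(np.allclose(c_hat[50:-50], c[50:-50], atol=0.1))         # ~True away from the edges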

2.3.5 Coder and Decoder modules

Figure 2-9 illustrates the stages of a conventional OFDM coder, and the corresponding schematic for the decoder is shown in Figure 2-10. As seen, in the coder the incoming bit sequence is first parallelized and modulated into complex symbols, usually applying a multilevel mapping (M-QAM) which can be different for every subcarrier. The IFFT algorithm then takes the OFDM symbol frame to the time domain, and the CP is added just before DAC and anti-aliasing filtering. Training symbols, that is, known OFDM symbol frames, can be sent before each data packet for receiver synchronization. In the general case, two signals corresponding to the real and imaginary parts of the OFDM symbol are obtained from the baseband OFDM coding, and these are fed to the optical modulation stage.

Figure 2-9 OFDM Coder


At the baseband decoder module, the reverse process takes place in order to post-process the received signal and recover the data sent. The real and imaginary parts of the received baseband OFDM signal provided by the optical demodulation stage at the receiver are first low-pass filtered, to avoid aliasing at high frequencies, and sent to a pair of ADCs in order to be digitized. The complex-valued electrical signal is then synchronized using the preamble added at the transmitter, and CP extraction takes place. The sequence is then converted from serial to parallel and demodulated with a fast Fourier transform (FFT). Afterwards, zero-padded subcarriers and pilot tones are extracted. Channel estimation using the pilot tones and the training sequence then takes place, and its output is the set of equalizer coefficients. Each subcarrier is then demodulated according to the corresponding modulation format and, finally, the restored bit sequences are serialized to recover the transmitted information sequence.

Figure 2-10 OFDM Decoder
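The whole coder/decoder chain can be sketched in a few lines; the sketch below uses QPSK on all subcarriers, omits training symbols, pilots and the DAC/ADC stages, and pushes the signal through an idealized two-tap channel followed by a one-tap equalizer (all parameters are illustrative assumptions):

import numpy as np

# Minimal OFDM coder/decoder round trip: QPSK map -> IFFT -> add CP ->
# dispersive channel -> remove CP -> FFT -> one-tap equalizer -> QPSK demap.
Nsc, cp = 64, 8
bits = np.random.randint(0, 2, 2 * Nsc)
sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)   # QPSK mapping
tx = np.fft.ifft(sym)
tx_cp = np.concatenate([tx[-cp:], tx])               # cyclic prefix

h = np.array([1.0, 0.0, 0.4])                        # toy dispersive channel impulse response
rx = np.convolve(tx_cp, h)[:len(tx_cp)]

Rx = np.fft.fft(rx[cp:cp + Nsc])                     # drop CP, back to frequency domain
H = np.fft.fft(h, Nsc)                               # channel frequency response (assumed known)
eq = Rx / H                                          # one-tap equalization per subcarrier
rx_bits = np.empty(2 * Nsc, dtype=int)
rx_bits[0::2] = (eq.real < 0).astype(int)
rx_bits[1::2] = (eq.imag < 0).astype(int)
print("bit errors:", int(np.sum(rx_bits != bits)))   # 0 in this noiseless sketch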

2.3.6 Cyclic prefix

As a consequence of the channel delays, the information of a transmitted symbol is spread out, polluting adjacent symbols in a phenomenon known as intersymbol interference (ISI). A time guard interval can then be added between symbols in order to accommodate the polluted signal part, leaving a time interval which contains only information from the useful data symbol and is not polluted. Moreover, a cyclic extension of the symbols is required within the guard time so that the ISI-free part of the symbol maintains the orthogonality among subcarriers, thus avoiding inter-carrier interference (ICI), as shown in Figure 2-11.


Figure 2-11 Transmitted optical signal through the channel (left) and OFDM signals

without CP at the transmitter, without CP at the receiver and with CP at the receiver side

On the right of Figure 2-11, where the DFT window is the OFDM symbol duration and tD is the delay induced by the chromatic dispersion of the fiber, a system with three electrical subcarriers and two OFDM symbols is depicted. As seen, part of the first OFDM symbol of the slower signal is introduced into the observation window of the second symbol due to the delay spread, causing the aforementioned ISI. Moreover, considering that the first OFDM symbol of the slower signal is incomplete, the orthogonality breaks down and ICI appears.
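A rough back-of-the-envelope sketch for sizing the cyclic prefix against the dispersion-induced delay tD discussed above (the fibre length, dispersion parameter, signal spectral width and sample rate are illustrative assumptions):

import numpy as np

# Rough cyclic-prefix sizing against chromatic-dispersion delay spread.
D = 17e-6                # dispersion, s/m^2 (i.e. 17 ps/(nm*km)), assumed value
L = 20e3                 # fibre length, m
d_lambda = 0.2e-9        # signal spectral width, m (~25 GHz around 1550 nm)
delay_spread = D * L * d_lambda          # CD-induced delay spread, seconds (~68 ps)
fs = 25e9                                # assumed DAC/ADC sample rate, samples/s
min_cp_samples = int(np.ceil(delay_spread * fs))
print(delay_spread * 1e12, "ps ->", min_cp_samples, "samples of CP at 25 GS/s")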


UNIT-3

PROPOSED APPROACH

3.1 REAL-TIME DSP-BASED OPTICAL TRANSCEIVERS

3.1.1 Transceiver Structure and Key Elements

The basic structure of a DSP-based optical transceiver is shown in Fig. 1. The key elements in

the transmitter are: high-speed digital logic for hardware-based DSP, a high-speed DAC for conversion of the digital signal samples to an analog electrical signal, a wideband RF section to amplify, filter and possibly up-convert the signal onto an RF carrier, and finally an electrical-to-optical (EO) converter that converts the analog electrical signal into an optical signal for fiber launching. The key elements in the receiver are: an optical-to-electrical (OE) converter to detect the optical signal and convert it to an electrical signal, a wideband RF section to filter, amplify and possibly down-convert the signal, and a high-speed ADC to convert the analog electrical signal to digital samples for processing by the high-speed digital logic. The DSP functions must be

implemented in digital hardware due to the ultrahigh processing speeds necessary to support the

multi-Gb/s optical signals. It may be feasible to implement some transceiver DSP functions in

software which can operate by subsampling the received signal, (e.g., synchronization functions).

However, for the majority of DSP functions, it is essential to employ digital hardware operating

at clock speeds of several 100 MHz to achieve sufficient processing throughput. For prototyping

real-time DSP hardware, FPGAs offer the ideal solution due to their reprogrammability. This

enables rapid evaluation, exploration, and optimization of the hardware-based algorithms. The

high cost and power consumption of FPGAs, however, makes them inappropriate for the cost

and power sensitive PON applications. It is therefore necessary to employ custom designed

application specific integrated circuits (ASICs) for real-time DSP in commercial products. ASICs

obviously require significant capital investment for development but reap the benefits of low

costs associated with high volume mass production of integrated circuits. ASICs also offer the

advantage of significant power reduction compared to FPGAs. The DAC and ADC are highly

critical components in the transceiver and are discussed in more detail in Section 3.1.2.


The transceiver structure in Fig. 1 is for operation with baseband electrical signals. It is also

possible to construct a transceiver which employs modulation of a single RF upconverted signal

or multiple RF signals in a single or multiband architecture. Moreover, transceiver architectures

can combine baseband signals with RF up-converted signals. Generally speaking, in a multiband

transceiver, multiple DACs and ADCs are required according to the number of subbands and

also depending on whether in-phase/quadrature (IQ) modulation is used. As DACs/ADCs are

critical components, it is greatly beneficial if advanced DSP algorithms can be used to relax both

the requirements and the number of DACs/ADCs required by a specific transmission system. A

transceiver employing RF signals obviously requires more complex RF sections, and issues such

as RF carrier phase and frequency offsets must be addressed. Unless specifically stated, the

baseband transceiver will be considered throughout this paper. However, the multiband

transceiver architecture is discussed in more detail in Section VI where its advantages over the

single-band baseband transceiver are analyzed. It should be noted, however, that data conveyed

by all signals are generated and recovered at baseband regardless of the transmission frequency

band, such that the implemented DSP functionality is similar for all sub-bands. It is of course

possible to use ultrawideband DACs and ADCs for direct digital-to-RF conversion [8] thus

eliminating the analog RF front-ends, but this approach is, at least for the present time, most

likely too costly for application in cost-sensitive PONs. Due to the cost sensitivity of the optical

access network it is necessary to employ low-cost optical front ends. For low cost optics, the

intensity-modulation direct-detection (IMDD) technique [9] is unrivalled. IMDD operates by

either direct modulation or external modulation of a laser source. Directly modulated lasers

(DML) offer the lowest cost solution. However, DMLs suffer from the phenomenon of

frequency chirp [10] which can degrade transceiver performance compared with the almost

chirp-free external modulation scheme. For direct detection, a photodiode or avalanche

photodiode is employed, which is a so-called square-law detector, as the electrical current

generated is proportional to the square of the optical field and therefore the optical signal

intensity. The photodiode is followed by a transimpedance amplifier to convert the detected

current to a voltage for the following RF section. For ultralow cost IMDD optics, a highly

promising laser source is the vertical cavity surface emitting laser (VCSEL) [11] as these lasers

can be produced at extremely low cost mainly due to the reduced manufacturing processes

involved.


A key point to emphasize is that the utilization of DSP enables the use of low-cost optical components in high-performance optical transceivers, as their associated characteristics of limited bandwidth, higher signal distortions, and wider tolerances can be compensated for by the DSP algorithms, either directly or indirectly, through advanced modulation formats that are more tolerant to the component deficiencies. For example, the high spectral efficiency and variable signal spectrum of adaptively modulated OFDM are able to fully utilize the limited bandwidth, and adapt to the varying frequency response, of low-cost optical components.

As with conventional optical transceivers, the downstream and upstream optical interfaces can

either operate at the same wavelength for a dual feeder fiber-based PON, or as is more typical, at

different wavelengths for operation with a single feeder fiber-based PON. Other advanced optical

transmission schemes, such as wavelength remodulation [12] and a lightwave-centralized architecture [13], can also be employed with DSP-based optical transceivers.

3.1.2 DACs and ADCs

The DAC and ADC are highly critical components in DSP-based optical transceivers. The required basic DAC/ADC characteristics are: high sample rates of the order of several GS/s, bit resolutions in the region of 8 bits (modulation format dependent), high linearity and low noise. DAC/ADC aspects that can have an impact on transceiver performance include: quantization noise due to the discrete signal levels, non-ideal linear behavior which causes the effective number of bits (ENOB) to be lower than the physical resolution, and the ENOB decreasing with signal frequency. The full scale of the DAC/ADC should be utilized to minimize the effect of quantization noise, which can necessitate automatic gain control (AGC) before the ADC. DACs also typically have a characteristic roll-off in frequency response due to the inherent sin(x)/x shaping of the zero-order-hold output format, as well as low-pass filtering effects of the on-chip analog front end. The sampling clock quality can moreover affect performance due to clock jitter and frequency offset. It should be emphasized here that DSP algorithms can be exploited to mitigate some of the non-ideal DAC/ADC properties and/or relax the required DAC/ADC performance requirements. The required DAC/ADC sampling rate for a given line rate of R (bit/s) is dependent on the electrical spectral efficiency E (b/s/Hz) of the adopted modulation format. The required signal bandwidth is B = R/E (Hz). Therefore, assuming operation over the entire Nyquist band and single-band transmission, the required sampling rate is S = 2·B = 2·(R/E) (samples/s). Fig. 2 shows graphically the variation of sample rate against line rate for different spectral efficiencies. It can be seen, for example, that if the sampling rate is limited to 20 GS/s, a 40 Gb/s line rate would require a modulation format with at least 4 b/s/Hz spectral efficiency. Modulation formats with high spectral efficiency are thus important to minimize DAC/ADC sample rates. Fig. 3 shows the bit resolutions and sample rates of some commercial high-speed DACs and ADCs currently available. The trend in DAC/ADC sampling rates has shown steady growth over the last 5 years [14], and developments are generally led by progress in high-end test equipment such as digital sampling oscilloscopes (ADCs) and arbitrary waveform generators (DACs). The DAC/ADC can contribute a significant portion of the total power consumption in an optical transceiver [15], so this is obviously a key area to be addressed in the development of future DACs/ADCs targeted at access network applications.

Fig. 2. DAC/ADC sample rate versus line rate for different spectral efficiencies.

Fig. 3. Bit resolutions and sample rates of commercially available DAC/ADCs.
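The relationship plotted in Fig. 2 is just S = 2·R/E; a quick sketch of the arithmetic (line rate and spectral-efficiency values chosen to match the example in the text) is:

# Required DAC/ADC sample rate S = 2*B = 2*(R/E), for single-band Nyquist operation.
def required_sample_rate_gsps(line_rate_gbps, spectral_eff):
    bandwidth_ghz = line_rate_gbps / spectral_eff    # B = R/E
    return 2 * bandwidth_ghz                         # S = 2*B

for E in (1, 2, 4, 8):
    print(E, "b/s/Hz ->", required_sample_rate_gsps(40, E), "GS/s for a 40 Gb/s line rate")
# e.g. 4 b/s/Hz -> 20.0 GS/s, matching the 20 GS/s example given above.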


3.2 REAL-TIME DSP IMPLEMENTATION WITH FPGAS

3.2.1. FPGA Technology

State-of-the-art FPGAs are unrivalled as a development platform for high-speed real-time signal

processing. Modern FPGAs support features such as:

Vast array of logic elements

Ultra-high-speed transceivers (tens of Gb/s)

Huge resources of high-speed I/O (Gb/s)

Embedded memory

Dedicated multiplier units

High-performance embedded DSP blocks

Embedded hard functions such as phase-locked loops (PLLs)

Soft microprocessor support

To illustrate the performance available from high-end FPGAs, Table 1 summarizes the features

of Altera's Stratix V family of FPGAs [16], implemented in 28 nm complementary metal-oxide-semiconductor (CMOS) technology. Not only do some devices offer almost 1 million logic elements and hundreds or thousands of dedicated DSP blocks, they also support an immense digital interface bandwidth, which is essential for interfacing to the multi-GS/s DACs and ADCs.

The huge digital interface bandwidth is provided by the multi-Gb/s embedded transceivers,

which can offer bidirectional peak bandwidths of over 1 Tb/s.

3.2.2. Parallelism and Pipelining for High Speed DSP

For the DSP-based optical transceiver, analog signal sample rates are of the order of several GS/s, whereas the digital logic can be clocked only at speeds of the order of several hundred MHz. To overcome this speed disparity, parallelism and pipelining techniques must be fully exploited to achieve the required processing throughput. Fig. 4 shows the principle of the technique. Incoming digital samples at multi-GS/s from the ADC are first passed through a serial-to-parallel (S/P) converter which generates parallel samples at a reduced sample rate compatible with the FPGA logic speed. In order to maintain the necessary sample throughput, the parallel samples are processed simultaneously. Furthermore, to maximize the clock speed of a digital logic function, it is partitioned into a series of sequential functions with the intermediate samples stored in registers; this is known as pipelining. Each sequential function is then clocked simultaneously, with its input samples taken from the registered outputs of the previous function. The maximum achievable clock speed is thus determined by the function with the longest propagation delay, which is significantly shorter than that of the corresponding non-pipelined function. Skillful partitioning of the higher-level function can thus enable maximization of a DSP function's clock frequency.

TABLE I

FEATURES OF ALTERA’S STRATIX V FPGA FAMILY

The sample throughput of a function (samples/s) is the product of the number of parallel samples and the clock frequency. Thus, if more (fewer) parallel samples are employed for a given throughput, the necessary clock speed is reduced (increased). The required logic resources are proportional to the number of parallel samples. There can thus be a trade-off between logic resources and clock speed. As power consumption is a function of clock speed, it is possible to trade off die area against power consumption.
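The trade-off stated above reduces to simple arithmetic; a sketch (the sample rate and lane counts are illustrative values) is:

# Throughput of a parallel, pipelined DSP function = number of parallel samples x clock.
sample_rate = 10e9                          # ADC sample rate, samples/s (illustrative)
for n_parallel in (16, 32, 64):
    clock_mhz = sample_rate / n_parallel / 1e6
    print(n_parallel, "parallel samples ->", clock_mhz, "MHz logic clock")
# Doubling the number of parallel samples (and hence the logic area) halves the
# required clock frequency, trading die area against power consumption.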


The pipelining approach may lead to an increase in the total function propagation delay. However, the high clock speeds involved mean that this is unlikely to cause unacceptable processing latency. Pipelining a function increases the clocking frequency, and as new output samples are generated on every clock cycle, the overall function throughput is significantly increased.

Fig. 4. Parallel and pipelined processing.

3.2.3. FPGA Interfacing to DACs and ADCs

The multi-GS/s DACs and ADCs employed mean that ultra-high-bandwidth digital interfaces between the FPGA and the DAC/ADC are necessary. A 56 GS/s, 8-bit converter requires a digital bandwidth of 448 Gb/s, for example. As previously indicated, FPGAs offer large resources of high-speed input/output (I/O) supporting speeds of the order of 1 Gb/s, as well as high-speed digital transceivers supporting speeds of several tens of Gb/s, as illustrated in Table I. As the FPGA logic cannot operate at these speeds, high-speed serializer and deserializer (SERDES) circuits implemented in dedicated circuitry are employed in the FPGA, as illustrated in Fig. 5. The SERDES circuits typically have a programmable range of parallelization ratios, thus permitting the logic array to operate with parallel data at a suitable clock frequency, which is a sub-multiple of the interface clock frequency. Due to the high frequencies involved, the logic signals at the digital interface are typically differential logic carried by controlled-impedance signal pairs, thus requiring impedance-matched interconnections. An example common interface logic standard is low-voltage differential signaling (LVDS), which has an impedance of 100 Ω and logic levels of ±350 mV. As the digital transceivers operate at higher frequencies than the I/O, they also incorporate embedded circuits to ensure high signal integrity. These include clock data recovery (CDR) and programmable equalization at the individual deserializer inputs, and programmable pre-emphasis at the individual serializer outputs. To programme the equalization and pre-emphasis features, characterization of the interconnection is of course necessary. The SERDES can also have arbitrary phase offsets at power-up, so it can be necessary to synchronize all SERDES when initializing the system. Test pattern generation by the ADC may thus be necessary in order to correctly synchronize the deserializers. Fig. 6 shows an example interface between an FPGA and a 10 GS/s, 8-bit, 4-port ADC. The interface consists of 32 signals operating at 2.5 GHz, and 32 × 10:1 deserializers are used in the FPGA to give 320 parallel signals at 250 MHz.
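The interface arithmetic quoted in this example can be checked directly; a sketch (treating the 32 signals at 2.5 GHz as 32 lanes carrying 2.5 Gb/s each) is:

# Numbers for the example FPGA/ADC interface: 10 GS/s, 8-bit converter.
sample_rate, resolution_bits = 10e9, 8
aggregate_gbps = sample_rate * resolution_bits / 1e9       # 80 Gb/s of digital data
n_lanes, lane_rate_gbps = 32, 2.5                          # 32 differential pairs
assert n_lanes * lane_rate_gbps == aggregate_gbps          # 32 x 2.5 Gb/s = 80 Gb/s
deser_ratio = 10                                           # 10:1 deserializers
print(n_lanes * deser_ratio, "parallel signals at", 1e3 * lane_rate_gbps / deser_ratio, "MHz")
# -> 320 parallel signals at 250.0 MHz, as stated above.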

3.3 DSP IN OOFDM-BASED OPTICAL ACCESS NETWORKS

3.3.1. OFDM Modulation

OFDM is a multicarrier modulation (MCM) technique first proposed in the 1960s [18]–[20] but

at that time its implementation was impractical. Salz and Weinstein [21] first proposed the use of the discrete Fourier transform (DFT) [22] for the generation of OFDM signals in 1969. It was not

until semiconductor electronics achieved sufficient processing power, however, that

implementation of OFDM with the DFT was feasible. Today the OFDM modulation technique is

widely adopted in numerous communication standards such as digital subscriber line (DSL) and

its many variants, wireless local area networks (WLAN) and digital audio and video broadcast

(DAB, DVB). OFDM is now widely recognized as a potential modulation technique for

application in future optical access networks [14], [23]–[28]. In [14], [23], the authors provide extensive coverage of OFDM in optical communications. The following sections of this paper provide an overview of the key principles of OFDM and adaptive OFDM, pertinent to the real-

time OOFDM implementation presented in this paper.


Fig. 5. Block diagram of a generic multicarrier transmission system.

The spreading effect associated with a dispersive channel causes adjacent received symbols to

overlap, a phenomenon known as intersymbol-interference (ISI). To further improve the

dispersion tolerance of OFDM, an intersymbol gap can be inserted between two adjacent

symbols to avoid the ISI occurring in the wanted signal region. To ensure the temporal spreading

of the signal in the intersymbol gap does not distort the wanted signal region, each subcarrier is

simply extended into the intersymbol gap [23]. As the subcarriers are all cyclic the simplest way

to achieve this is to take an appropriate portion from the end of the symbol and prefix it to the

front of the symbol, this is thus known as a cyclic prefix (CP). The CP causes a transmission

overhead and so reduces the net bit rate, the length of the CP should thus be only as long as

necessary to eliminate ISI from the wanted signal region of the symbol and to also provide

sufficient system operation robustness. In addition to dispersion tolerance, OFDM has the

extremely beneficial characteristic of high spectral efficiency due to the subcarrier orthogonality

property allowing the subcarriers to overlap in the frequency domain without interference.

The primary advantage of OFDM over single-carrier schemes is its ability to cope with

severe channel conditions (for example, attenuation of high frequencies in a long copper wire,

narrowband interference and frequency-selective fading due to multipath) without complex

equalization filters. Channel equalization is simplified because OFDM may be viewed as using

many slowly modulated narrowband signals rather than one rapidly modulated wideband signal.


The low symbol rate makes the use of a guard interval between symbols affordable, making it

possible to eliminate intersymbol interference (ISI) and utilize echoes and time-spreading (on

analogue TV these are visible as ghosting and blurring, respectively) to achieve a diversity gain,

i.e. a signal-to-noise ratio improvement. This mechanism also facilitates the design of single

frequency networks (SFNs), where several adjacent transmitters send the same signal

simultaneously at the same frequency, as the signals from multiple distant transmitters may be

combined constructively, rather than interfering as would typically occur in a traditional single-

carrier system. The following list is a summary of existing OFDM-based standards and products.

Cable

ADSL and VDSL broadband access via POTS copper wiring,

DVB-C2, an enhanced version of the DVB-C digital cable TV standard,

Power line communication (PLC),

ITU-T G.hn, a standard which provides high-speed local area networking of existing home

wiring (power lines, phone lines and coaxial cables). [2]

TrailBlazer telephone line modems,

Multimedia over Coax Alliance (MoCA) home networking.

Wireless

The wireless LAN (WLAN) radio interfaces IEEE 802.11a, g, n, ac and HIPERLAN/2.

The digital radio systems DAB/EUREKA 147, DAB+, Digital Radio Mondiale, HD

Radio, T-DMB and ISDB-TSB.

The terrestrial digital TV systems DVB-T and ISDB-T.

The terrestrial mobile TV systems DVB-H, T-DMB, ISDB-T and MediaFLO forward link.

The wireless personal area network (PAN) ultra-wideband (UWB) IEEE

802.15.3a implementation suggested by WiMedia Alliance.

The OFDM-based multiple access technology OFDMA is also used in several 4G and pre-4G cellular networks and mobile broadband standards:


The mobility mode of the wireless MAN/broadband wireless access (BWA) standard IEEE

802.16e (or Mobile-WiMAX).

The mobile broadband wireless access (MBWA) standard IEEE 802.20.

The downlink of the 3GPP Long Term Evolution (LTE) fourth-generation mobile broadband

standard. The radio interface was formerly named High Speed OFDM Packet

Access (HSOPA), now named Evolved UMTS Terrestrial Radio Access (E-UTRA).

Summary of advantages

High spectral efficiency as compared to other double sideband modulation schemes, spread

spectrum, etc.

Can easily adapt to severe channel conditions without complex time-domain equalization.

Robust against narrow-band co-channel interference.

Robust against intersymbol interference (ISI) and fading caused by multipath propagation.

Efficient implementation using Fast Fourier Transform (FFT).

Low sensitivity to time synchronization errors.

Tuned sub-channel receiver filters are not required (unlike conventional FDM).

Facilitates single frequency networks (SFNs); i.e., transmitter macrodiversity.

Summary of disadvantages

Sensitive to Doppler shift.

Sensitive to frequency synchronization problems.

High peak-to-average-power ratio (PAPR), requiring linear transmitter circuitry, which

suffers from poor power efficiency.

Loss of efficiency caused by cyclic prefix/guard interval.


PASSIVE OPTICAL NETWORK (PON)

A passive optical network (PON) is a telecommunications network that uses point-to-multipoint

fiber to the premises in which unpowered optical splitters are used to enable a single optical fiber

to serve multiple premises. A PON consists of an optical line terminal (OLT) at the service

provider's central office and a number of optical network units (ONUs) near end users. A PON

reduces the amount of fiber and central office equipment required compared with point-to-point

architectures. A passive optical network is a form of fiber-optic access network.

In most cases, downstream signals are broadcast to all premises sharing multiple fibers.

Encryption can prevent eavesdropping.

Upstream signals are combined using a multiple access protocol, usually time division multiple access (TDMA).

Fig. 6. TDMA

FSAN and ITU

Starting in 1995, work on fiber-to-the-home architectures was done by the Full Service Access Network (FSAN) working group, formed by major telecommunications service providers and system vendors.[1] The International Telecommunications Union (ITU) did further work, and standardized two generations of PON. The older ITU-T G.983 standard was based on Asynchronous Transfer Mode (ATM), and has therefore been referred to as APON (ATM PON). Further improvements to the original APON standard, as well as the gradual falling out of favor of ATM as a protocol, led to the full, final version of ITU-T G.983 being referred to more often as broadband PON, or BPON. A typical APON/BPON provides 622 megabits per second (Mbit/s) (OC-12) of downstream bandwidth and 155 Mbit/s (OC-3) of upstream bandwidth, although the standard accommodates higher rates.

The ITU-T G.984 Gigabit-capable Passive Optical Networks (GPON) standard represented an

increase, compared to BPON, in both the total bandwidth and bandwidth efficiency through the

use of larger, variable-length packets. Again, the standards permit several choices of bit rate, but

the industry has converged on 2.488 gigabits per second (Gbit/s) of downstream bandwidth, and

1.244 Gbit/s of upstream bandwidth. GPON Encapsulation Method (GEM) allows very efficient

packaging of user traffic with frame segmentation.

By mid-2008, Verizon had installed over 800,000 lines. British Telecom, BSNL, Saudi Telecom

Company, Etisalat, and AT&T were in advanced trials in Britain, India, Saudi Arabia, the UAE,

and the USA, respectively. GPON networks have now been deployed in numerous networks

across the globe, and the trends indicate higher growth in GPON than other PON technologies.

G.987 defined 10G-PON with 10 Gbit/s downstream and 2.5 Gbit/s upstream – framing is "G-

PON like" and designed to coexist with GPON devices on the same network.[2]

Security

Developed in 2009 by Cable Manufacturing Business to meet SIPRNet requirements of the US

Air Force, secure passive optical network (SPON) integrates gigabit passive optical network

(GPON) technology and protective distribution system (PDS).

Changes to the NSTISSI 7003 requirements for PDS and the mandate by the US Federal

Government for GREEN technologies allowed for the US Federal Government's consideration of the two technologies as an alternative to Active Ethernet and encryption devices.

The chief information officer of the US Department of the Army issued a directive to adopt the technology by fiscal year 2013. It is marketed to the US military by companies such as Telos Corporation.[3][4][5][6]

IEEE


In 2004, the Ethernet PON (EPON or GEPON) standard 802.3ah-2004 was ratified as part of

the Ethernet in the first mile project of the IEEE 802.3. EPON uses standard 802.3 Ethernet

frames with symmetric 1 gigabit per second upstream and downstream rates. EPON is applicable

for data-centric networks, as well as full-service voice, data and video networks. 10 Gbit/s EPON

or 10G-EPON was ratified as an amendment IEEE 802.3av to IEEE 802.3. 10G-EPON supports

10/1 Gbit/s. The downstream wavelength plan supports simultaneous operation of 10 Gbit/s on one wavelength and 1 Gbit/s on a separate wavelength, allowing IEEE 802.3av and IEEE 802.3ah to operate on the same PON concurrently. The upstream channel can support simultaneous operation of IEEE 802.3av and 1 Gbit/s 802.3ah on a single shared (1,310 nm) channel.

There are currently over 40 million installed EPON ports, making it the most widely

deployed PON technology globally. EPON is also the foundation for cable operators’ business

services as part of the DOCSIS Provisioning of EPON (DPoE) specifications.

Network elements

A PON takes advantage of wavelength division multiplexing (WDM), using one wavelength for

downstream traffic and another for upstream traffic on a single mode fiber (ITU-T G.652).

BPON, EPON, GEPON, and GPON have the same basic wavelength plan and use the 1,490

nanometer (nm) wavelength for downstream traffic and 1,310 nm wavelength for upstream

traffic. 1,550 nm is reserved for optional overlay services, typically RF (analog) video.

As with bit rate, the standards describe several optical budgets; the most common is 28 dB of loss

budget for both BPON and GPON, but products have been announced using less expensive

optics as well. 28 dB corresponds to about 20 km with a 32-way split. Forward error

correction (FEC) may provide another 2–3 dB of loss budget on GPON systems. As optics

improve, the 28 dB budget will likely increase. Although both the GPON and EPON protocols

permit large split ratios (up to 128 subscribers for GPON, up to 32,768 for EPON), in practice

most PONs are deployed with a split ratio of 1x32 or smaller.
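As a rough sanity check on the 28 dB class budget quoted above, the splitter and fibre contributions for a 32-way split over 20 km can be estimated as below. The attenuation, excess-loss and connector figures are typical assumed values used only for illustration, not limits taken from the standards.

```python
import math

# Hedged, illustrative loss-budget estimate for a GPON-style link
# (all numeric values below are typical assumptions, not standard limits).
split_ratio = 32
splitter_loss = 10 * math.log10(split_ratio) + 1.5   # ideal split loss plus assumed excess loss (dB)
fibre_loss = 0.35 * 20                                # assumed 0.35 dB/km at 1310 nm over 20 km
connector_loss = 2 * 0.5 + 1.0                        # assumed connectors and splices (dB)

total = splitter_loss + fibre_loss + connector_loss
print(f"splitter ~ {splitter_loss:.1f} dB, fibre ~ {fibre_loss:.1f} dB, "
      f"connectors/splices ~ {connector_loss:.1f} dB, total ~ {total:.1f} dB")
# roughly 26 dB, i.e. comfortably inside the 28 dB class budget.
```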

A PON consists of a central office node, called an optical line terminal (OLT), one or more user

nodes, called optical network units (ONUs) or optical network terminals (ONTs), and the fibers

and splitters between them, called the optical distribution network (ODN). “ONT” is an ITU-T


term to describe a single-tenant ONU. In multiple-tenant units, the ONU may be bridged to a

customer premises device within the individual dwelling unit using technologies such as Ethernet

over twisted pair, G.hn (a high-speed ITU-T standard that can operate over any existing home

wiring - power lines, phone lines and coaxial cables) or DSL. An ONU is a device that

terminates the PON and presents customer service interfaces to the user. Some ONUs implement

a separate subscriber unit to provide services such as telephony, Ethernet data, or video.

An OLT provides the interface between a PON and a service provider's core network. These

typically include:

IP traffic over Fast Ethernet, gigabit Ethernet, or 10-gigabit Ethernet;

Standard TDM interfaces such as SDH/SONET;

ATM UNI at 155–622 Mbit/s.

The ONT or ONU terminates the PON and presents the native service interfaces to the user.

These services can include voice (plain old telephone service (POTS) or voice over IP (VoIP)),

data (typically Ethernet or V.35), video, and/or telemetry (TTL, ECL, RS530, etc.). Often the

ONU functions are separated into two parts:

The ONU, which terminates the PON and presents a converged interface—such

as DSL, coaxial cable, or multiservice Ethernet—toward the user;

Network termination equipment (NTE), which inputs the converged interface and outputs

native service interfaces to the user, such as Ethernet and POTS.

A PON is a shared network, in that the OLT sends a single stream of downstream traffic that is

seen by all ONUs. Each ONU only reads the content of those packets that are addressed to it.

Encryption is used to prevent eavesdropping on downstream traffic.

Upstream bandwidth allocation

The OLT is responsible for allocating upstream bandwidth to the ONUs. Because the optical

distribution network (ODN) is shared, ONU upstream transmissions could collide if they were

transmitted at random times. ONUs can lie at varying distances from the OLT, meaning that the

transmission delay from each ONU is unique. The OLT measures delay and sets a register in


each ONU via PLOAM (physical layer operations and maintenance) messages to equalize its

delay with respect to all of the other ONUs on the PON.

Once the delay of all ONUs has been set, the OLT transmits so-called grants to the individual

ONUs. A grant is permission to use a defined interval of time for upstream transmission. The

grant map is dynamically re-calculated every few milliseconds. The map allocates bandwidth to

all ONUs, such that each ONU receives timely bandwidth for its service needs.

Some services – POTS, for example – require essentially constant upstream bandwidth, and the

OLT may provide a fixed bandwidth allocation to each such service that has been

provisioned. DS1 and some classes of data service may also require constant upstream bit rate.

But much data traffic, such as browsing web sites, is bursty and highly variable.

Through dynamic bandwidth allocation (DBA), a PON can be oversubscribed for upstream

traffic, according to the traffic engineering concepts of statistical multiplexing. (Downstream

traffic can also be oversubscribed, in the same way that any LAN can be oversubscribed. The

only special feature in the PON architecture for downstream oversubscription is the fact that the

ONU must be able to accept completely arbitrary downstream time slots, both in time and in

size.)

In GPON there are two forms of DBA, status-reporting (SR) and non-status reporting (NSR). In

NSR DBA, the OLT continuously allocates a small amount of extra bandwidth to each ONU. If

the ONU has no traffic to send, it transmits idle frames during its excess allocation. If the OLT

observes that a given ONU is not sending idle frames, it increases the bandwidth allocation to

that ONU. Once the ONU's burst has been transferred, the OLT observes a large number of idle

frames from the given ONU, and reduces its allocation accordingly. NSR DBA has the

advantage that it imposes no requirements on the ONU, and the disadvantage that there is no way

for the OLT to know how best to assign bandwidth across several ONUs that need more.

In SR DBA, the OLT polls ONUs for their backlogs. A given ONU may have several so-called

transmission containers (T-CONTs), each with its own priority or traffic class. The ONU reports

each T-CONT separately to the OLT. The report message contains a logarithmic measure of the

backlog in the T-CONT queue. By knowledge of the service level agreement for each T-CONT

across the entire PON, as well as the size of each T-CONT's backlog, the OLT can optimize

allocation of the spare bandwidth on the PON. EPON systems use a DBA mechanism equivalent


to GPON's SR DBA solution. The OLT polls ONUs for their queue status and grants bandwidth

using the MPCP GATE message, while ONUs report their status using the MPCP REPORT

message.
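The status-reporting grant cycle can be illustrated with a toy allocator that divides the spare upstream capacity in proportion to the reported backlogs, on top of any provisioned guarantees. The function name, the proportional policy and the byte figures below are illustrative assumptions only; they do not reflect the actual GPON T-CONT or MPCP grant formats.

```python
def sr_dba_allocate(reports, frame_capacity_bytes, fixed_allocations):
    """Toy status-reporting DBA: split spare capacity in proportion to reported backlog.

    reports              -- dict mapping T-CONT id -> reported backlog in bytes
    frame_capacity_bytes -- total upstream bytes available in one grant cycle
    fixed_allocations    -- guaranteed (provisioned) bytes per T-CONT
    """
    grants = dict(fixed_allocations)                  # start from guaranteed bandwidth
    spare = frame_capacity_bytes - sum(grants.values())
    total_backlog = sum(reports.values())
    for tcont, backlog in reports.items():
        share = spare * backlog / total_backlog if total_backlog else 0
        grants[tcont] = grants.get(tcont, 0) + min(share, backlog)
    return grants

# Example cycle: three T-CONTs, one with a guaranteed POTS-like allocation.
print(sr_dba_allocate({"onu1-data": 8000, "onu2-data": 2000, "onu3-voice": 200},
                      frame_capacity_bytes=12000,
                      fixed_allocations={"onu3-voice": 500}))
```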

Enabling technologies

Due to the topology of PON, the transmission modes for downstream (that is, from OLT to

ONU) and upstream (that is, from ONU to OLT) are different. For the downstream transmission,

the OLT broadcasts optical signal to all the ONUs in continuous mode (CM), that is, the

downstream channel always has optical data signal. However, in the upstream channel, ONUs

cannot transmit optical data signal in CM. Use of CM would result in all of the signals

transmitted from the ONUs converging (with attenuation) into one fiber by the power splitter

(serving as power coupler), and overlapping. To solve this problem, burst mode (BM)

transmission is adopted for upstream channel. The given ONU only transmits optical packet

when it is allocated a time slot and it needs to transmit, and all the ONUs share the upstream

channel in the time division multiplexing (TDM) mode. The phases of the BM optical packets

received by the OLT are different from packet to packet, since the ONUs are not synchronized to

transmit optical packets in the same phase, and the distance between the OLT and a given ONU is random. Since the distances between the OLT and the ONUs are not uniform, the optical packets received by the OLT may have different amplitudes. In order to compensate for the phase variation

and amplitude variation in a short time (for example within 40 ns for GPON [9]), burst mode clock

and data recovery (BM-CDR) and burst mode amplifier (for example burst mode TIA) need to

be employed, respectively. Furthermore, the BM transmission mode requires the transmitter to

work in burst mode. Such a burst mode transmitter is able to turn on and off in short time. The

above three kinds of circuitries in PON are quite different from their counterparts in the point-to-

point continuous mode optical communication link.

Fiber to the premises

Passive optical networks do not use electrically powered components to split the signal. Instead,

the signal is distributed using beam splitters. Each splitter typically splits the signal from a single

fiber into 16, 32, or 64 fibers, depending on the manufacturer, and several splitters can be

aggregated in a single cabinet. A beam splitter cannot provide any switching or buffering

capabilities and doesn't use any power supply; the resulting connection is called a point-to-


multipoint link. For such a connection, the optical network terminals on the customer's end must

perform some special functions which would not otherwise be required. For example, due to the

absence of switching, each signal leaving the central office must be broadcast to all users served

by that splitter (including to those for whom the signal is not intended). It is therefore up to the

optical network terminal to filter out any signals intended for other customers. In addition, since

splitters have no buffering, each individual optical network terminal must be coordinated in

a multiplexing scheme to prevent signals sent by customers from colliding with each other. Two

types of multiplexing are possible for achieving this: wavelength-division multiplexing and time-

division multiplexing. With wavelength-division multiplexing, each customer transmits their

signal using a unique wavelength. With time-division multiplexing (TDM), the customers "take

turns" transmitting information. TDM equipment has been on the market longest. Because there

is no single definition of "WDM-PON" equipment, various vendors claim to have released the

'first' WDM-PON equipment, but there is no consensus on which product was the 'first' WDM-

PON product to market.

Passive optical networks have both advantages and disadvantages over active networks. They

avoid the complexities involved in keeping electronic equipment operating outdoors. They also

allow for analog broadcasts, which can simplify the delivery of analog television. However,

because each signal must be pushed out to everyone served by the splitter (rather than to just a

single switching device), the central office must be equipped with a particularly powerful piece

of transmitting equipment called an optical line terminal (OLT). In addition, because each

customer's optical network terminal must transmit all the way to the central office (rather than to

just the nearest switching device), reach extenders would be needed to achieve the distance from

central office that is possible with outside plant based active optical networks. Optical

distribution networks can also be designed in a point-to-point "homerun" topology where

splitters and/or active networking are all located at the central office, allowing users to be patched

into whichever network is required from the optical distribution frame.

3.3.2. Adaptively Modulated OFDM

An important characteristic of OFDM is the ability to modulate each subcarrier independently

[31], [32] which allows the signal to adapt to the spectral characteristics of the complete

transmission channel, which includes the fiber and transceiver components. As for any modulated signal, its bit error rate (BER) performance depends on the received signal-to-noise ratio (SNR); thus, for a desired BER there is a corresponding minimum SNR requirement. The minimum SNR is modulation format dependent: as the number of encoded bits increases, the signal's tolerance to noise and distortion decreases, so the minimum required SNR increases. Each OFDM subcarrier can experience different noise and distortion due to their frequency dependent nature; SNR is therefore subcarrier frequency dependent. There are two basic

methods to ensure the minimum SNR is achieved for a specific subcarrier. First, for a fixed

modulation format an individual subcarrier’s transmitted power level can be adjusted to achieve

the minimum SNR at the receiver. Second, if the transmitted subcarrier power is fixed, the SNR at the receiver cannot be adjusted; however, the modulation format adopted on a particular subcarrier can be varied so that the minimum required SNR is below, but as near as possible to, the actual SNR.
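The second approach, per-subcarrier adaptive modulation, amounts to choosing for each subcarrier the densest constellation whose minimum required SNR is still below the measured SNR. A minimal sketch is given below; the required-SNR thresholds are placeholder values for illustration only, not measured figures for the transceiver discussed here.

```python
# Hedged sketch of adaptive modulation-format loading per subcarrier.
# The required-SNR thresholds below are illustrative placeholders.
REQUIRED_SNR_DB = {      # modulation format -> assumed minimum SNR for the target BER
    "128-QAM": 27.0,
    "64-QAM": 24.0,
    "32-QAM": 21.0,
    "16-QAM": 18.0,
    "none":   0.0,       # subcarrier dropped if even 16-QAM cannot be supported
}

def load_subcarriers(measured_snr_db):
    """Assign the highest-order format whose required SNR is below the measured SNR."""
    loading = []
    for snr in measured_snr_db:
        for fmt in ("128-QAM", "64-QAM", "32-QAM", "16-QAM", "none"):
            if snr >= REQUIRED_SNR_DB[fmt]:
                loading.append(fmt)
                break
    return loading

print(load_subcarriers([28.3, 25.1, 22.4, 19.0, 15.2]))
# ['128-QAM', '64-QAM', '32-QAM', '16-QAM', 'none']
```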

3.3 REAL-TIME DSP FOR OPTICAL OFDM

Fig.7 Real time DSP for optical OFDM


Table II. Key OOFDM transceiver parameters.

3.3.1. Implementation

The DSP architecture of Bangor’s most recent real-time OOFDM transceiver design, based on

Altera’s Stratix II GX FPGAs, is shown in Fig. 13 and key transceiver parameters are presented

in Table II. The architecture is similar to the functional DSP architecture shown in Fig. 11. The

key differences are discussed here. In the transmitter, a data generator block is implemented to

provide parallel pseudorandom binary data for transmission, 15 adaptive modulators are

employed each of which can perform either 16, 32, 64, or 128-QAM encoding by selecting one

of four distinct encoders [34], and a power loading block allows live adjustment of individual

subcarrier power. The 32-point IDFT is implemented with an inverse fast Fourier transform (IFFT); this is a resource-efficient form of the IDFT as discussed in Section V-B. The clipping and quantization block has an online adjustable clipping level for live optimization. After the

cyclic prefix is added the signed digital samples are converted to unsigned samples required by

the DAC. This block also inserts a low power synchronization signal for symbol alignment as

described in Section V-D. The samples and bits are then correctly organized for interfacing to

32×10:1 serializers feeding 32 I/O operating at 1 GHz. The interface to the DAC thus consists of

4 × 8 bit ports such that 4 digital samples are transferred in parallel to the 4 GS/s DAC. All

online controlled parameters are controlled via embedded memory accessed via the FPGA’s

Joint Test Action Group (JTAG) [55] interface.


In the receiver, the ADC interface is the reverse of the DAC interface such that 32 1 GHz I/O

feed 32×10:1 deserializers. A reorganization block restructures the parallel samples correctly.

These samples are used by the symbol offset detection block to determine the arbitrary sample

offset from the symbol boundaries (described in Section V-D). The incoming samples are thus

realigned to the symbol boundary and the cyclic prefix removed to provide 32 real-valued time

domain samples corresponding to one OFDM symbol. The 32 real-valued samples are converted

to 32 signed complex samples by removing the DC offset added by the ADC and setting all

imaginary components to zero. The 32 complex time domain samples are then fed to a 32 point

FFT. The FFT is used as it provides a highly efficient implementation of the DFT. From the 32

complex frequency domain coefficients output from the FFT, only the 15 positive frequencies are

selected. These correspond to the 15 data-encoded frequency domain subcarriers. The pilot

detection block operates on the 15 frequency domain subcarriers to detect the pilot subcarriers

which are used by the channel estimation block to determine the CTF as described in Section V-

C. The 15 subcarriers will subsequently be equalized using the estimated CTF. The encoded

binary data are then decoded.
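A compact software model of the receiver path just described (symbol realignment aside) is sketched below: strip the cyclic prefix, remove the ADC's dc offset, take a 32-point FFT of the real-valued samples, and keep the 15 positive-frequency, data-carrying bins. The 8-sample cyclic prefix is inferred from the 40 samples per symbol period quoted later for this design, and the dc offset value is an arbitrary stand-in, so both are assumptions for illustration only.

```python
import numpy as np

N_FFT = 32          # IFFT/FFT size used in the transceiver
N_CP = 8            # assumed cyclic prefix: 40 samples/symbol minus 32 FFT points
N_SC = 15           # data-carrying positive-frequency subcarriers

def recover_subcarriers(symbol_samples, adc_dc_offset=128):
    """Sketch of the receiver DSP for one OOFDM symbol (after symbol alignment)."""
    assert len(symbol_samples) == N_FFT + N_CP
    samples = np.asarray(symbol_samples, dtype=float)
    samples = samples[N_CP:] - adc_dc_offset        # drop CP, remove the ADC dc offset
    spectrum = np.fft.fft(samples)                  # 32 complex frequency-domain coefficients
    return spectrum[1:N_SC + 1]                     # keep the 15 positive-frequency bins

# Quick self-test: loop a random transmitter symbol back through the receiver model.
rng = np.random.default_rng(0)
tx_bins = np.zeros(N_FFT, dtype=complex)
tx_bins[1:N_SC + 1] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N_SC)
tx_bins[-N_SC:] = np.conj(tx_bins[1:N_SC + 1][::-1])     # Hermitian symmetry -> real signal
time_domain = np.fft.ifft(tx_bins).real
tx_symbol = np.concatenate([time_domain[-N_CP:], time_domain]) + 128
print(np.allclose(recover_subcarriers(tx_symbol), tx_bins[1:N_SC + 1]))
```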

3.3.2. IFFT and FFT

To explicitly compute x_n and X_k from the definitions of an N-point IDFT and DFT, as given in (5) and (6) respectively, would require N^2 complex multiplications and N^2 − N complex additions.

For a hardware-based implementation of the transforms, it is highly advantageous to minimize

computational complexity in order to minimize design complexity. Furthermore, the extremely

high IDFT/DFT real-time computational throughput inherent to OOFDM implies that a highly

parallel and pipelined architecture is necessary. This makes it difficult to reuse complex

functions for more than one calculation during one transform cycle. Therefore, minimizing the

number of discrete instances of complex functions in the algorithm is vitally important if chip

cost and power consumption targets are to be met. The fast Fourier transform (FFT) and inverse

FFT (IFFT) are highly computationally efficient algorithms for computing the DFT and IDFT,

respectively. The FFT was first introduced by Tukey and Cooley in 1965 in their seminal paper

“An algorithm for the Machine Calculation of Complex Fourier Series” [59]. The drastic

reduction in computational complexity offered by the FFT and IFFT makes them highly

appropriate for implementation in physical hardware and thus ideal for use in real-time OOFDM transceivers.

Fig. 8. Structure of a 16-point radix-2 decimation-in-time FFT.

The IFFT can be created from an FFT by simple modification. Therefore, the

following discussions will concentrate on the FFT although they are equally applicable to the

IFFT. When the original sequence and all subsequences are equally divided at each step, this is a radix-2 FFT with N = 2^M, where M is the number of recombination steps. Other radices are created when the sequence is split into more than two subsequences at each stage. For example, if the original sequence is first split into four subsequences, this is repeated for M = log_4 N steps. In this case N = 4^M and so it is a radix-4 FFT. The possible radix values that can be employed are therefore dependent on the required value of N. To allow more flexibility in the value of N, it is

also possible to create mixed-radix FFTs. A detailed examination of the conversion of one N-point DFT into two N/2-point DFTs will be presented, as this also explains the origin of the fundamental building block of the FFT: the butterfly operator. If the original time domain sequence x_n is split into its even and odd sequences, y_n = x_{2n} and z_n = x_{2n+1} respectively, for n = 0,1,..,(N/2)−1, and substituted into the DFT as defined,

X_k = Σ_{n=0}^{N−1} x_n ω_N^{nk},   for k = 0,1,..,(N−1) and ω_N = e^{−j2π/N},

it can be rewritten, using the relation ω_N^{2nk} = ω_{N/2}^{nk}, as

X_k = Σ_{n=0}^{(N/2)−1} y_n ω_{N/2}^{nk} + ω_N^k Σ_{n=0}^{(N/2)−1} z_n ω_{N/2}^{nk}.   (9)

The original DFT has now been expressed as a simple combination of two DFTs each of length N/2. The DFT on the right of (9) is multiplied by the factor ω_N^k, which accounts for the relative time shift between the sub-sequences and is known as the twiddle factor. If these N/2-point DFTs are denoted as Y_k and Z_k respectively, we can write

X_k = Y_k + ω_N^k Z_k   (10)

and

X_{k+N/2} = Y_{k+N/2} + ω_N^{k+N/2} Z_{k+N/2}   (11)

for k = 0,1,..,(N/2)−1. (11) can be further simplified, as ω_N^{N/2} = −1 and Y_k and Z_k have a period of N/2, which gives the following pair of equations known as butterfly operators:

X_k = Y_k + ω_N^k Z_k
X_{k+N/2} = Y_k − ω_N^k Z_k


The selected splitting method can affect the order of the sequence reordering and the sequence recombination. The aforementioned example splits the original sequence and the subsequences into even and odd sequences. This requires the input coefficients x_n to first be reordered and then the sequences to be recombined. As x_n is reordered, this type of FFT architecture is known as decimation-in-time. If, however, the splitting method divides according to the first-half and last-half subsequences, x_n and x_{n+N/2}, where n = 0,1,..,(N/2)−1, the FFT first performs the recombination of the naturally ordered input sequence and then reorders the X_k coefficients. This type of FFT architecture is therefore known as decimation-in-frequency.

Fig. 9. Radix-2 decimation in time butterfly element

The selected architecture for the FFT and IFFT implemented in the real-time OOFDM

transceiver is based on the fact that a 32-point DFT is required. A radix-2 FFT can be used as N = 32 = 2^5. There is no difference in the complexity of decimation-in-time and decimation-in-frequency, therefore decimation-in-time is selected. The implemented FFT architecture is therefore the Cooley–Tukey radix-2 decimation-in-time. To implement the IFFT it is only necessary to modify the twiddle factors, which is apparent from the opposite sign of the exponential power in the IDFT in (5) compared to the DFT. Thus, for the IFFT, the twiddle factors are now ω_N^{−k}. To convert the FFT function to an IFFT function, the twiddle factor values are thus simply replaced with their complex conjugates. It is important to consider the savings in computational complexity achieved by the implemented FFT architecture compared to explicit computation according to the DFT definition.

Explicit computation requires N^2 complex multiplications and N^2 − N complex additions, whereas a radix-2 N-point FFT will have (N/2)log_2 N butterfly operators, each consisting of one complex multiplier and two complex additions. Thus, in total there are (N/2)log_2 N complex multipliers and N log_2 N complex additions. For the case of the implemented 32-point FFT, the computational saving is ∼92% for the complex multiplications and ∼84% for the complex additions. The actual saving is higher when taking into account the instances where the


twiddle factors are unity. The immense computational efficiency of the IFFT is clear from the savings achieved. Furthermore, the computational saving increases further with higher values of N. For a hardware-based implementation of the DFT, the FFT is therefore indispensable due to the advantages associated with the vast reduction in the required logic resources. The FFT and IFFT will still constitute one of the largest logic functions, if not the largest logic function, in the OOFDM transceiver, and so the optimization of the FFT logic is an important issue.
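To make the recombination structure concrete, the recursive form of the radix-2 decimation-in-time algorithm is sketched below, with the inverse transform obtained by conjugating the twiddle factors as described above. This is a floating-point software model used only to illustrate the butterflies, not the fixed-point, pipelined hardware implementation discussed in the text.

```python
import cmath
import random

def _fft_core(x, sign):
    """Radix-2 decimation-in-time transform; sign=-1 gives the DFT and sign=+1 the
    conjugated-twiddle (inverse) transform without the 1/N scaling."""
    n = len(x)
    if n == 1:
        return list(x)
    even = _fft_core(x[0::2], sign)      # y_n = x_{2n}
    odd = _fft_core(x[1::2], sign)       # z_n = x_{2n+1}
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t             # butterfly: X_k       = Y_k + w^k Z_k
        out[k + n // 2] = even[k] - t    #            X_{k+N/2} = Y_k - w^k Z_k
    return out

def fft_radix2(x):
    return _fft_core(x, -1)

def ifft_radix2(X):
    return [v / len(X) for v in _fft_core(X, +1)]

# Sanity check against the direct DFT definition for N = 32.
random.seed(1)
x = [complex(random.random(), random.random()) for _ in range(32)]
dft = [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / 32) for m in range(32)) for k in range(32)]
print(max(abs(a - b) for a, b in zip(fft_radix2(x), dft)) < 1e-9)
print(max(abs(a - b) for a, b in zip(ifft_radix2(fft_radix2(x)), x)) < 1e-9)
```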

As an example, Fig. 8 shows the structure of a 16 point radix-2 decimation-in-time FFT. The

repeated divide by 2 structure is apparent in the right-to-left direction and the decimation-in-time

architecture results in the reordering of the x_n values. An important issue with hardware

implementation is bit resolution control of the intermediate stage sample values. As the butterfly

elements within each stage contain multiplication and addition functions, the bit resolutions of

the output samples will increase at each stage. If this is not restricted, the final stage will have an excessive bit resolution and so large logic resources will be consumed. It is thus critical that the intermediate sample resolutions are truncated to limit the excessive escalation of sample bit resolutions while maintaining sufficient calculation precision. As shown in Fig. 8, sample resolution

reduction must be built into the FFT structure between stages. Another important factor which

can affect logic resource usage and calculation precision is the bit resolution of the twiddle factor

values. This must be carefully selected as overly high resolution can cause excessive use of logic, whereas overly low resolution can cause insufficient calculation precision. The implemented twiddle factor is a 6-bit signed complex value. These butterfly operators are the fundamental FFT building blocks used at the recombination stages of the radix-2 FFT, and are depicted by the example symbol shown in Fig. 9. To convert two subsequences Y_k and Z_k to the single N-point sequence X_k will thus require N/2 discrete butterfly operators. Different radix values and

sequence splitting methods will have their own corresponding butterfly elements. The radix-4

butterfly, for example, has 4 coefficient inputs, three twiddle factor inputs, and 4 coefficient

outputs.

3.3.3. Pilot Detection, Channel Estimation, and Equalization

The frequency response of the transmission channel introduces subcarrier amplitude and phase

changes during transmission. The received signal is therefore no longer a direct representation of

the transmitted signal. To compensate for the effect of the channel response, the inverse channel


response is applied in the receiver, which is termed channel equalization. In order to perform

equalization, the CTF must be estimated. An advantage of OFDM is that channel equalization

can be extremely simple. As the amplitude and phase of each subcarrier are determined at a

discrete frequency, the CTF only needs to be known at the corresponding frequency to allow the

subcarrier to be equalized. Equalization can then be achieved by a single complex multiplication

in the frequency domain.

Fig. 10. Subcarrier equalization to compensate for channel induced amplitude and phase

changes

Fig. 11. Pilot and data bearing subcarrier mapping in the OOFDM time-frequency symbol

space.

The corresponding received pilot symbol R_k is

R_k = B_k e^{jφ_k} + W_k

where the received subcarrier amplitude and phase are B_k and φ_k respectively, and W_k is the noise component of the kth subcarrier after the receiver FFT. The CTF in the frequency domain, H_k, is then determined from the known transmitted pilot symbol P_k as

H_k = (R_k − W_k) / P_k;

the CTF is thus estimated as

Ĥ_k = R_k / P_k = H_k + W_k / P_k.

A large pilot symbol amplitude therefore reduces the error due to noise. To further reduce the effect of channel noise, the estimated CTF can be averaged over many pilot symbols as long as the channel can be considered to be static over the averaging period. To equalize the received frequency domain complex data, d′_{k,m}, encoded onto the kth subcarrier, a single multiplication by the inverse CTF estimate, Ĥ_k^{−1}, is applied. The equalized encoded complex data value d_{k,m} is defined as

d_{k,m} = Ĥ_k^{−1} d′_{k,m}.

The subcarrier equalization principle is illustrated in Fig. 10. The real-time OOFDM transceiver implements pilot subcarrier-based channel estimation in the following way. In the transmitter, the pilot insertion function follows the parallel data generator, such that one extra parallel bit sequence of a fixed pattern, representing known pilot subcarrier data, is diagonally mapped into the OOFDM time-frequency symbol space as shown in Fig. 11. Mathematically, the pilot and data-bearing subcarrier mapping onto the frequency domain subcarriers X_{k,m} can be expressed as

X_{k,m} = p_{k,m} at the diagonally mapped pilot positions (one pilot subcarrier per symbol, cycling through the N_s subcarriers every N_s symbols), and X_{k,m} = d_{k,m} otherwise,

where p_{k,m} and d_{k,m} are the encoded complex pilot and data values, respectively, and N_s is the total number of data bearing subcarriers (N_s is 15 in this case). The diagonal pilot mapping approach was adopted as it has the advantage that no buffering of the incoming data is required when all subcarriers carry the same number of bits. However, it is still necessary to direct the N_s − 1 incoming data streams and the single pilot data value to the appropriate subcarriers on a per


OFDM symbol basis.

In the receiver, the 15 data-bearing OFDM subcarriers in the positive frequency bins are selected

for channel estimation and subsequent data recovery at the FFT output. When transmission is

first established, the pilot detection block must locate the symbols such that the first subcarrier is

a pilot subcarrier. These symbols are regarded as pilot subcarrier reference points relative to

which all other pilot subcarriers can be located. At the output of the FFT, the identification of the

received pilot subcarriers is first made by performing operations (20) and (21) on subcarrier 1 of consecutive symbols, where X_{m,1} (X*_{(m+N_s),1}) is the received complex (complex conjugate) value of subcarrier 1 of the mth [(m+N_s)th] symbol, and C is a preset integer number determining the total number of N_s symbol-spaced D values used for averaging. The magnitude squared function is used in (21) as this gives a real-valued Q value to simplify peak detection, and is also easier to compute than the absolute magnitude, which would require a square-root operation. As the pilot mapping sequence repeats every N_s symbols, Q_{m,1} must be determined for N_s adjacent symbol locations. For the implemented design with N_s = 15, 15 values of Q_{m,1} for consecutive symbol positions must be determined.

The data-bearing subcarriers are modulated with complex values encoded using a random data

sequence. This results in minimized Q values due to the averaging process. On the other

hand, each of the pilot subcarriers is modulated with a fixed complex number of maximal amplitude, causing the occurrence of a Q peak corresponding to the symbol locations where subcarrier 1 is the pilot subcarrier, as illustrated in Fig. 12. A large C will make the Q peak more distinguishable, but this requires a longer time and more logic resources to conduct the averaging operation, such that C should be optimized. Experimental measurements show that C = 16 is

adequate for reliable detection of pilot subcarriers. By locating the peak in the 15 detected and

stored Q values the symbols are identified where subcarrier 1 is the pilot. Based on this reference

pilot, all other pilot subcarriers in subsequent symbols can be easily identified due to their fixed


relative positions. In the implemented design, the pilot detection function operates continuously.

However, after identifying the reference pilot subcarriers, the pilot detection process could be

terminated and only needs to be reactivated following a break in the transmission.

Making use of the known transmitted pilot subcarriers and the received pilot subcarriers, the channel estimation block determines the complex CTF, H_k (k = 1, 2, . . ., N_s), by performing the operation defined in (22):

Ĥ_k = (1/M) Σ_{i=0}^{M−1} R_{(k+iN_s),k} / P_{(k+iN_s),k} = (1/(M·P_C)) Σ_{i=0}^{M−1} R_{(k+iN_s),k}   (22)

where R_{(k+iN_s),k} (P_{(k+iN_s),k}) is the received (assigned) complex value of the kth pilot subcarrier in the (k+iN_s)th symbol. As a constant power, P_C, is assigned to the pilot subcarriers, the simplified expression on the right of (22) is used. To reduce the noise effect associated with the transmission system, frequency response averaging is performed over M pilot subcarriers at each frequency. Here, M is taken to be 32, which is an optimum value identified experimentally [61]. Thus, to compute H_k, parallel summation functions with suitable scaling are implemented over 32 pilots for each subcarrier. The 15 computed complex values forming the CTF are stored and fed to the channel equalization block, with new values continuously computed every 32 symbols.

The CTF obtained in the channel estimation function is then used by the channel equalization

block to equalize each individual subcarrier using the following operation:

X̂_{m,k} = X_{m,k} / Ĥ_k

where X_{m,k} is the received complex value of the kth unequalized subcarrier in the mth symbol. The channel equalization function thus consists of 15 parallel complex dividers. The equalized subcarriers, X̂_{m,k}, provide the inputs to the 15 parallel adaptive demodulators. The real and imaginary parts of the 15 complex CTF parameters H_k determined by the channel estimation block are probed by the Signal Tap II embedded logic analyzer. H_1 to H_15 are extracted by the


Signal Tap II application in order to view the live system frequency response from IFFT input to

FFT output. This feature is utilized in combination with individual subcarrier BER

measurements to manually determine suitable levels for the variable power loading profile

employed in the transmitter.
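Numerically, the estimation and equalization steps reduce to an average of received-to-transmitted pilot ratios followed by one complex division per subcarrier, as sketched below. The array shapes, noise level and constant pilot value are illustrative assumptions, and the made-up channel is used only to show that the averaged estimate equalizes the data correctly.

```python
import numpy as np

N_SC, M_AVG = 15, 32            # data subcarriers and pilots averaged per CTF update

def estimate_ctf(rx_pilots, pilot_value):
    """Estimate H_k by averaging M received pilots per subcarrier (rx_pilots: M_AVG x N_SC)."""
    return rx_pilots.mean(axis=0) / pilot_value

def equalize(rx_subcarriers, ctf):
    """One complex division per subcarrier (the '15 parallel complex dividers')."""
    return rx_subcarriers / ctf

# Toy check: a made-up channel is recovered from noisy pilots and then used to equalize data.
rng = np.random.default_rng(2)
true_h = (0.5 + rng.uniform(0, 1, N_SC)) * np.exp(1j * rng.uniform(0, 2 * np.pi, N_SC))
pilot = 2.0 + 0j
rx_pilots = true_h * pilot + 0.05 * (rng.normal(size=(M_AVG, N_SC)) +
                                     1j * rng.normal(size=(M_AVG, N_SC)))
h_hat = estimate_ctf(rx_pilots, pilot)
data = rng.choice([1 + 1j, -1 - 1j], size=N_SC)
print(np.allclose(equalize(data * true_h, h_hat), data, atol=0.05))
```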

Fig. 12. Pilot subcarrier identification using Q peaks after FFT in the receiver.

It is important to note that, due to the quasi-static nature of the optical channel, a low CTF

estimate update rate can be employed without degrading system performance. The channel

estimation technique can therefore insert pilot data only in periodic bursts of pilot subcarriers.

This allows all 15 subcarriers to be used for data transmission between pilot bursts. The insertion

rate of the pilot bursts can be as low as 10 Hz [61], corresponding to an extremely low overhead

of 0.001% for the channel estimation function.

3.3.4 Symbol Synchronization

Symbol timing offset (STO) is the difference between the correct symbol start position and the

estimated symbol start position. Symbol synchronization is necessary to minimize STO, which is

ideally zero, as nonzero STO leads to degraded BER performance if the processed samples do


not all originate from the same symbol. It should be noted, however, that if the CP length

exceeds the ISI length by y samples, an STO of up to y samples can be tolerated without

performance degradation. STO tolerance can thus be improved by increasing CP length. A DSP-

based symbol synchronization method has been experimentally demonstrated that is highly

suitable for application in OOFDM multiple access-based passive optical networks (OOFDMA-

PONs). This is because the technique can achieve symbol, timeslot, and frame alignment of an

optical network unit’s (ONU’s) upstream and downstream signals without the need to interrupt

existing ONU traffic.

The corresponding received signal S_RX can be written as

S_RX = S_OOFDM + S_ALIGN + S_N

where S_OOFDM is the OOFDM data signal, S_ALIGN is the dc offset symbol alignment signal, and S_N represents system noise.

In the receiver, a cross-correlation method is used to detect the position of S_ALIGN. A signal S_CORR is generated which has an identically shaped waveform to S_ALIGN and an amplitude of ±1 to simplify computation. By computing the cross-correlation between S_RX and S_CORR, the symbol alignment offset can be determined based on the location of the correlation peaks. This is because there is no correlation between S_CORR and either S_OOFDM or S_N due to their Gaussian random characteristics; the cross-correlation is therefore entirely dependent on S_ALIGN. An arbitrarily positioned sequence of 2·M·Z samples is processed, where Z is the total number of samples in an OOFDM symbol, and M is a sufficiently large integer selected to give clear correlation peaks. As S_ALIGN is cyclic, a symbol summation, or accumulation, can be performed before the cross-correlation. A signal S_SUM is calculated, where S_SUM(n) is the nth sample within S_SUM and n = 1 to Z. S_SUM is thus the sum of M sequences of Z consecutive samples spaced at intervals of 2·Z samples. If M is large enough, the waveform of S_SUM will take on the shape of S_ALIGN, as the Gaussian random characteristics of S_OOFDM and S_N result in their summations both tending to zero. The exact shape of S_SUM will depend on the symbol alignment offset relative to the arbitrarily selected samples. Signal transitions from positive to negative, and vice versa, will thus coincide with the OOFDM symbol boundaries. The cross-correlation is then performed between S_SUM and S_CORR with the relative offset, v, of S_CORR varied from 0 to (2·Z)−1 and the correlation value COR(v) for each offset calculated using (27). The sequence of values COR(0) to COR(2Z−1) provides a correlation profile, C_PROF, where the position of the peaks indicates the offset where the highest correlation between S_ALIGN and S_CORR occurs, thus

identifying the position of the OOFDM symbol. A positive (negative) peak will occur in C_PROF when S_CORR and S_ALIGN are in phase (in opposite phase), both of which indicate symbol alignment as S_CORR and S_ALIGN have a period of 2·T_S. By taking |COR(v)|, only positive peaks then occur in C_PROF and it is only necessary to select Z samples in every 2·Z samples to ensure a peak is detected. Fig. 14 shows the ideal variation of |COR(v)| against offset v for an arbitrary symbol alignment offset of w0. The addition of the dc offset level is performed in the


signed to unsigned block in Fig. 13. The added dc offset is online adjustable to allow optimization. In the experimental demonstration it was shown that a dc offset as small as ±1 quantization level was sufficient and resulted in no reduction in system BER performance. A block diagram of the implemented symbol offset detection function is shown in Fig. 15. The sum and accumulate block consists of 40 parallel accumulators, corresponding to the 40 samples per symbol period, to generate a new S_SUM every 10 000 symbols as M = 5000. As each accumulator sums a total of 5000 8-bit samples between resets, scaling is used to limit the accumulator outputs to 12 bits. 40 parallel cross-correlators are employed to generate the correlation profile. A peak detector detects the position of the correlation profile peak to determine the symbol offset value. It should be noted that, as the correlation signal S_CORR has values of ±1, the cross-correlation function consists of 40 add/subtract operators each with 40 inputs, with the sign of the corresponding S_CORR value determining whether a sample is added or subtracted.

The use of multipliers is thus avoided to reduce design complexity. Also, it should be noted that

although multiple parallel cross-correlators were employed, it would be possible to significantly

reduce logic resources by implementing the function with a single cross-correlator and sequentially incrementing the offset of the correlation signal to build up the correlation profile one value at a time. As previously discussed, the symbol synchronization technique is designed for application in OOFDMA-PONs to achieve downstream and upstream alignment of symbols,

timeslots and frames. Here, the mechanism for achieving synchronization of an ONU in a live

PON is described. For downstream symbol alignment, the optical line terminal (OLT)

continuously transmits a synchronization signal which all ONUs use for symbol alignment. In

the upstream direction, the OLT controls the synchronization process, allowing only one ONU to

transmit a synchronization signal at any one time. For each ONU, the OLT detects its upstream

symbol offset and then notifies the ONU via a control channel, so that it can correctly realign its

symbol positions. The ONU thus contains a symbol offset detection function in its receiver and a

symbol offset adjustment function in both its transmitter and receiver. The OLT only requires the

symbol offset detection function in its receiver. By constructing the synchronization signal from a coded sequence of dc offsets, enhancements can be made that offer a number of key features. For upstream and downstream frame/timeslot alignment, a suitably coded synchronization signal

with the same length as one or more OOFDMA frames allows the OLT to detect an ONU’s


frame alignment offset by performing a cross-correlation over a period equivalent to one or more

coded sequence lengths.

Fig. 13. OOFDM signal combination with dc offset symbol alignment signal (alignment signal dc levels are exaggerated for clarity).

Fig. 14. Ideal correlation profile.

ONU frame alignment offset is detected and then corrected in the ONU, such that each ONU is

timeslot aligned to the network before initiating OOFDMA signal transmission and hence

avoiding any upstream ONU signal collisions in the operational network. As symbol offset can

drift slowly over time, the OLT must periodically track and correct any symbol offset drift for

each ONU in turn. Furthermore, coding the downstream synchronization signal with a sufficiently long encrypted key code will make it virtually impossible for an unauthorized user to achieve synchronization, thus achieving network security at the physical layer. An alternative

symbol synchronization technique employing a subtraction-based correlation method for cyclic

prefix location has also been implemented and fully verified in the real-time transceiver.
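As a behavioural summary of the correlation-based offset detector described in this section, the sketch below accumulates symbol-spaced blocks of the received samples, cross-correlates the accumulated block against a ±1 copy of the alignment waveform at every offset, and reports the position of the largest |COR(v)|, reduced modulo the symbol length since both positive and negative peaks indicate symbol alignment. The square alignment waveform, amplitude, noise level and M = 500 used here are illustrative stand-ins rather than the implemented 40-sample-per-symbol, M = 5000 design.

```python
import numpy as np

def detect_symbol_offset(rx, s_align, M):
    """Cross-correlation symbol-offset detection (behavioural sketch).

    rx      -- received samples containing the periodic alignment signal (period 2*Z)
    s_align -- one period (2*Z samples) of the alignment waveform
    M       -- number of 2*Z-sample blocks accumulated before correlating
    """
    Z2 = len(s_align)                                   # 2*Z samples
    s_corr = np.sign(s_align)                           # +/-1 correlation signal
    s_sum = rx[:M * Z2].reshape(M, Z2).sum(axis=0)      # symbol-spaced accumulation (S_SUM)
    cor = np.array([np.dot(np.roll(s_corr, v), s_sum) for v in range(Z2)])
    peak = int(np.argmax(np.abs(cor)))                  # offset of the correlation peak
    return peak % (Z2 // 2)                             # modulo Z: both peaks mark alignment

# Toy demonstration with Z = 40 samples per symbol and an assumed square alignment waveform.
rng = np.random.default_rng(3)
Z, M, true_offset = 40, 500, 13
s_align = np.concatenate([np.ones(Z), -np.ones(Z)])     # one 2*Z-sample period, dc-offset style
tx = np.tile(np.roll(s_align, true_offset), M + 1)
rx = 0.5 * tx + rng.normal(size=tx.size)                # buried in OFDM-like Gaussian 'noise'
print(detect_symbol_offset(rx, s_align, M))             # expected to report 13
```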


Fig. 15. Symbol offset detection block diagram

3.3.5. Clock Synchronization

Accurate synchronization of the OOFDM receiver and transmitter sampling clocks is essential to

minimize their sampling frequency offset (SFO) [67]. SFO induces inter-channel interference (ICI), which produces increasing received signal distortion with increasing SFO. ICI results from

the loss of subcarrier orthogonality due to the mismatch between the discrete subcarrier

frequencies in the receiver compared to those in the transmitter. SFO also induces a drift in

symbol alignment, necessitating periodic symbol realignment. Due to the noise-like nature of the OOFDM signal, clock recovery is not straightforward. However, asynchronous (nonzero SFO) and synchronous (zero SFO) clocking techniques in real-time OOFDM transmissions have been

demonstrated. The CP detection-based symbol alignment method [66] supports asynchronous

clocking. The technique is able to compensate for SFO as it continuously readjusts the symbol

alignment and so prevents the accumulation of excessive STO.


3.4. MULTIBAND OOFDM-BASED PONS FOR IMPROVED COST EFFECTIVENESS

As the cost of the ONU is critical in an OOFDMA PON, it is important to avoid unnecessary

overengineering of the ONU. Employing single-band OOFDM transceivers in a PON leads to the

undesirable scenario, where all ONUs support the full peak PON capacity. In practice, however,

an ONU will only ever need to operate at a reduced peak capacity. If a multiband OOFDM

approach is adopted this can overcome the drawbacks of the single-band approach. Fig. 22

illustrates the multiband OFDM signal generation principle. Each OFDM transceiver generates a

baseband signal, which is then up-converted using a unique RF carrier frequency such that the

generated subbands do not overlap in the frequency domain. A frequency division multiplexing

(FDM) method is thus adopted to combine the OOFDM subbands together. In the downstream

direction, the OOFDM subbands are electrically summed in the OLT before EO conversion,

whereas for the upstream direction, the summation occurs in the optical domain in the optical

coupler in the PON’s remote node.

There are many advantages associated with the multiband OOFDM technique particularly when

considering the ONU implementation. It offers the key advantage of flexibility in adopted

DAC/ADC bandwidth as this is no longer dictated by the total PON capacity. Moreover, ONU

signal processing complexity is reduced, as only one PON subband is processed. If the subband transceiver is designed to support subband tunability, it provides increased network operation

efficiency in terms of both dynamic bandwidth provisioning and equipment logistics.

Furthermore, the reduced complexity leads to a reduction in ONU power consumption. Although

the OLT must support all subbands, the same tunable subband transceiver electronics, as used in the ONUs, can be employed. This provides the benefits of economies of scale in transceiver manufacturing, and also allows a scalable OLT architecture, where capacity can expand in subband capacity increments in line with service take-up. It would also be possible to implement

dynamic traffic redistribution across subbands, so that when PON traffic levels permit,

transceivers can be powered down to reduce OLT energy consumption. As higher cost and

complexity can be tolerated at the OLT side of a multiband OOFDMA-PON, an alternative OLT

architecture employing wideband DACs and ADCs, for direct digital-to-RF conversion of all


subbands, is also conceivable.

An OOFDM transceiver designed to support a single subband in a multiband system will additionally require two RF mixers, a single tunable LO and RF filters to support the up-

conversion and down-conversion of the OFDM signal. The IMDD optical components will be

similar for the single-band and multiband transceivers; although the multiband approach will

require wider optical component bandwidths. This is because the optical band must encompass

all subbands to support dynamic subband tunability. Furthermore, double side-band subbands

and the inter-subband spacing will increase the required optical bandwidth.

It is important to compare the difference between subband generation using a single carrier, and

using two orthogonal carriers for IQ modulation. IQ modulated subbands have the advantage of

increasing spectral efficiency. This theoretically halves the subband bandwidth for a given data

capacity. As a consequence, optical component bandwidth requirements are significantly

reduced. For IQ modulation, DAC and ADC pairs are needed, as illustrated in Fig. 12, though

the bandwidths are now halved compared to a single-carrier generated subband of the same data

capacity.

Fig. 17. Multiband OFDM signal generation principle.

The savings in IFFT/FFT processing complexity and DAC/ADC bandwidths will now be

considered in detail for the case of an ONU that supports one subband using a single RF carrier.

For an IMDD system employing real-valued time domain signals, for each subband, the

relationship between the number of data-carrying subcarriers N_s and the IFFT/FFT size N is N_s = (N/2) − 1. For an N-point radix-2 decimation-in-time IFFT/FFT architecture, the number of complex operations is N log_2 N complex additions and (N/2) log_2 N complex multiplications, as


described in Section V-B. If the total capacity and thus the number of subcarriers in a PON is

fixed, then the effect of employing multiple bands is to reduce the number of subcarriers per

subband and also reduce the required bandwidth of each subband. If each ONU supports one

subband, the required IFFT/FFT complexity and DAC/ADC bandwidth for each ONU will

reduce as the number of subbands increases. Table III and Fig. 23 illustrate this relationship for a

PON with a total of at least 500 subcarriers.
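The scaling argument can be reproduced numerically. Fixing the PON at a total of at least 500 data subcarriers and giving each ONU one subband, the per-ONU FFT size and operation counts fall as the number of subbands grows. The values printed below are recomputed from the N_s = (N/2) − 1 relationship and the radix-2 operation counts above; they are a recomputation for illustration, not a copy of the paper's Table III.

```python
import math

TOTAL_SUBCARRIERS = 500          # minimum data-carrying subcarriers for the whole PON

def per_onu_requirements(num_subbands):
    """FFT size and complex-operation counts for an ONU handling one subband."""
    sc_per_band = math.ceil(TOTAL_SUBCARRIERS / num_subbands)
    # Smallest power-of-two N with (N/2 - 1) >= sc_per_band (real-valued IMDD signal).
    n = 2
    while n // 2 - 1 < sc_per_band:
        n *= 2
    return n, (n // 2) * int(math.log2(n)), n * int(math.log2(n))

for bands in (1, 2, 4, 8, 16):
    n, mults, adds = per_onu_requirements(bands)
    print(f"{bands:2d} subband(s): N = {n:5d}, "
          f"{mults:5d} complex multiplications, {adds:5d} complex additions per transform")
```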


UNIT-4

CONCEPTS OF DSP AND OC

4.1.DSP

Digital signal processing (DSP) is the mathematical manipulation of an information signal to

modify or improve it in some way. It is characterized by the representation of discrete time,

discrete frequency, or other discrete domain signals by a sequence of numbers or symbols and

the processing of these signals.

The goal of DSP is usually to measure, filter and/or compress continuous real-world analog

signals. Usually, the first step is conversion of the signal from an analog to a digital form,

by sampling and then digitizing it using an analog-to-digital converter (ADC), which turns the

analog signal into a stream of discrete digital values. Often, however, the required output signal

is also analog, which requires a digital-to-analog converter (DAC). Even if this process is more

complex than analog processing and has a discrete value range, the application of computational

power to signal processing allows for many advantages over analog processing in many

applications, such as error detection and correction in transmission as well as data compression.[1]

Digital signal processing and analog signal processing are subfields of signal processing. DSP

applications include audio and speech signal processing, sonar and radar signal processing,

sensor array processing, spectral estimation, statistical signal processing, digital image

processing, signal processing for communications, control of systems, biomedical signal

processing, seismic data processing, among others. DSP algorithms have long been run on

standard computers, as well as on specialized processors called digital signal processors, and on

purpose-built hardware such as application-specific integrated circuit (ASICs). Currently, there

are additional technologies used for digital signal processing including more powerful general

purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal

controllers (mostly for industrial applications such as motor control), and stream processors,

among others.[2]

Digital signal processing can involve linear or nonlinear operations. Nonlinear signal processing

is closely related to nonlinear system identification[3] and can be implemented in the time,

frequency, and spatio-temporal domains.


4.1.1.Signal sampling

The increasing use of computers has resulted in the increased use of, and need for, digital signal

processing. To digitally analyze and manipulate an analog signal, it must be digitized with an

analog-to-digital converter. Sampling is usually carried out in two

stages, discretization and quantization. In the discretization stage, the space of signals is

partitioned into equivalence classes and quantization is carried out by replacing the signal with a representative signal of the corresponding equivalence class. In the quantization stage, the

representative signal values are approximated by values from a finite set.

The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its

samples if the sampling frequency is greater than twice the highest frequency of the signal, but

this requires an infinite number of samples. In practice, the sampling frequency is often

significantly higher than twice that required by the signal's limited bandwidth.

Some (continuous-time) periodic signals become non-periodic after sampling, and some non-

periodic signals become periodic after sampling. In general, for a periodic signal with period T to be periodic (with period N samples) after sampling with sampling interval T_s, the following must be satisfied:

N·T_s = k·T

where k is an integer.[4]
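The two stages can be sketched in a few lines: discretize a continuous-time signal by evaluating it at multiples of the sampling interval, then quantize the sample values to a finite set of uniform levels. The tone, sampling rate and resolution below are arbitrary illustrative choices.

```python
import numpy as np

def sample_and_quantize(analog, duration_s, fs_hz, n_bits, full_scale=1.0):
    """Discretize (sample at fs_hz) and quantize (round to 2**n_bits uniform levels)."""
    t = np.arange(0, duration_s, 1.0 / fs_hz)            # discretization instants
    samples = analog(t)
    step = 2 * full_scale / (2 ** n_bits)                 # uniform quantizer step size
    codes = np.clip(np.round(samples / step), -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1)
    return t, codes * step                                # reconstructed quantized values

# Example: a 1 kHz tone sampled at 8 kHz (well above the 2 kHz Nyquist rate) with 8-bit resolution.
tone = lambda t: 0.9 * np.sin(2 * np.pi * 1000 * t)
t, q = sample_and_quantize(tone, duration_s=0.002, fs_hz=8000, n_bits=8)
print(np.round(q[:8], 3))
```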

4.1.2.DSP domains

In DSP, engineers usually study digital signals in one of the following domains: time

domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain,

and wavelet domains. They choose the domain in which to process a signal by making an

informed assumption (or by trying different possibilities) as to which domain best represents the

essential characteristics of the signal. A sequence of samples from a measuring device produces

a temporal or spatial domain representation, whereas a discrete Fourier transform produces the

frequency domain information, that is, the frequency spectrum. Autocorrelation is defined as

the cross-correlation of the signal with itself over varying intervals of time or space.

4.1.3.Time and space domains


The most common processing approach in the time or space domain is enhancement of the input

signal through a method called filtering. Digital filtering generally consists of some linear

transformation of a number of surrounding samples around the current sample of the input or

output signal. There are various ways to characterize filters; for example:

A "linear" filter is a linear transformation of input samples; other filters are "non-linear".

Linear filters satisfy the superposition condition, i.e. if an input is a weighted linear

combination of different signals, the output is an equally weighted linear combination of

the corresponding output signals.

A "causal" filter uses only previous samples of the input or output signals; while a "non-

causal" filter uses future input samples. A non-causal filter can usually be changed into a

causal filter by adding a delay to it.

A "time-invariant" filter has constant properties over time; other filters such as adaptive

filters change in time.

A "stable" filter produces an output that converges to a constant value with time, or

remains bounded within a finite interval. An "unstable" filter can produce an output that

grows without bounds, with bounded or even zero input.

A "finite impulse response" (FIR) filter uses only the input signals, while an "infinite

impulse response" filter (IIR) uses both the input signal and previous samples of the

output signal. FIR filters are always stable, while IIR filters may be unstable.

A filter can be represented by a block diagram, which can then be used to derive a sample

processing algorithm to implement the filter with hardware instructions. A filter may also be

described as a difference equation, a collection of zeroes and poles or, if it is an FIR filter,

an impulse response or step response.

The output of a linear digital filter to any given input may be calculated by convolving the

input signal with the impulse response.
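To make these points concrete, the sketch below applies a causal FIR filter (a 5-tap moving average, chosen purely as an example) both by convolving the input with the impulse response and through MATLAB's filter function, and contrasts it with a first-order IIR filter expressed as a difference equation:

x = randn(1, 200);                 % input samples (illustrative)
h = ones(1,5)/5;                   % FIR impulse response: 5-tap moving average
yFIRconv = conv(x, h);             % output via convolution with the impulse response
yFIR     = filter(h, 1, x);        % same filter written as a difference equation (b = h, a = 1)

a = [1 -0.9];  b = 1;              % IIR example: y(n) = 0.9*y(n-1) + x(n)
yIIR = filter(b, a, x);            % uses previous output samples as well as the input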

4.1.4.Frequency domain


Signals are converted from time or space domain to the frequency domain usually through

the Fourier transform. The Fourier transform converts the signal information to a magnitude and

phase component of each frequency. Often the Fourier transform is converted to the power

spectrum, which is the magnitude of each frequency component squared. The most common purpose of analyzing signals in the frequency domain is the analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. In addition to frequency information, phase information is often needed; this can also be obtained from the Fourier transform. In some applications, how the phase varies with frequency can be a significant consideration.

Filtering, particularly in non-real-time work, can also be achieved by converting to the frequency domain, applying the filter, and then converting back to the time domain. This is a fast, O(n log n) operation and can give essentially any filter shape, including excellent approximations to brick-wall filters. There are some commonly used frequency-domain transformations. For example, the cepstrum converts a signal to the frequency domain through a Fourier transform, takes the logarithm, then applies another (inverse) Fourier transform. This emphasizes the harmonic structure of the original spectrum. Frequency-domain analysis is also called spectrum or spectral analysis.
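A minimal MATLAB sketch of FFT-based filtering and of the (real) cepstrum described above; the signal, cut-off frequency, and mask are assumed for illustration only:

fs = 1e3;  t = 0:1/fs:1-1/fs;
x  = cos(2*pi*50*t) + cos(2*pi*200*t);    % two tones (illustrative)
N  = numel(x);
f  = (0:N-1)*fs/N;                        % frequency axis (0 ... fs)

% Frequency-domain low-pass filtering: transform, apply mask, transform back
H  = double(f < 100 | f > fs-100);        % crude brick-wall mask, keeping both spectral halves
y  = real(ifft(fft(x) .* H));             % filtered time-domain signal

% Real cepstrum: Fourier transform, logarithm, inverse Fourier transform
c  = real(ifft(log(abs(fft(x)) + eps)));  % eps avoids log(0)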

4.2.OPTICAL COMMUNICATION

Optical communication, also known as optical telecommunication, is communication at a

distance using light to carry information. It can be performed visually or by using electronic

devices. The earliest basic forms of optical communication date back several millennia, while the

earliest electrical device created to do so was the photophone, invented in 1880. An

optical communication system uses a transmitter, which encodes a message into an

optical signal, a channel, which carries the signal to its destination, and a receiver, which

reproduces the message from the received optical signal. When electronic equipment is not

employed the 'receiver' is a person visually observing and interpreting a signal, which may be

either simple (such as the presence of a beacon fire) or complex (such as lights using color codes

or flashed in a Morse code sequence).Free-space optical communication has been deployed in

space, while terrestrial forms are naturally limited by geography, weather and the availability of

light. The following subsections provide a basic introduction to different forms of optical communication.


4.2.1.Forms

Visual techniques such as smoke signals, beacon fires, hydraulic telegraphs, ship

flags and semaphore lines were the earliest forms of optical communication.[1][2][3][4] Hydraulic

telegraph semaphores date back to the 4th century BCE in Greece. Distress flares are still used by

mariners in emergencies, while lighthouses and navigation lights are used to communicate

navigation hazards. The heliograph uses a mirror to reflect sunlight to a distant observer.[5] When

a signaler tilts the mirror to reflect sunlight, the distant observer sees flashes of light that can be

used to transmit a prearranged signaling code. Naval ships often use signal lamps and Morse

code in a similar way.

Aircraft pilots often use visual approach slope indicator (VASI) projected light systems to land

safely, especially at night. Military aircraft landing on an aircraft carrier use a similar system to

land correctly on a carrier deck. The coloured light system communicates the aircraft's height

relative to a standard landing glideslope. In addition, airport control towers still use Aldis lamps to

transmit instructions to aircraft whose radios have failed.

In the present day, a variety of electronic systems optically transmit and receive information carried by pulses of light. Fiber-optic communication cables are now employed to

send the great majority of the electronic data and long distance telephone calls that are not

conveyed by either radio, terrestrial microwave or satellite. Free-space optical

communications are also used every day in various applications.

Semaphore line

A replica of one of Chappe's semaphore towers (18th century).

A 'semaphore telegraph', also called a 'semaphore line', 'optical telegraph', 'shutter telegraph

chain', 'Chappe telegraph', or 'Napoleonic semaphore', is a system used for conveying

information by means of visual signals, using towers with pivoting arms or shutters, also known


as blades or paddles. Information is encoded by the position of the mechanical elements; it is

read when the shutter is in a fixed position.

Semaphore lines were a precursor of the electrical telegraph. They were far faster than post

riders for conveying a message over long distances, but far more expensive and less private than

the electrical telegraph lines which would later replace them. The maximum distance that a pair

of semaphore telegraph stations can bridge is limited by geography, weather and the availability

of light; thus, in practical use, most optical telegraphs used lines of relay stations to bridge longer

distances. Each relay station would also require its complement of skilled operator-observers to

convey messages back and forth across the line.

The modern design of semaphores was first foreseen by the British polymath Robert Hooke, who

first gave a vivid and comprehensive outline of visual telegraphy in a 1684 submission to

the Royal Society. His proposal (which was motivated by military concerns following the Battle

of Vienna the preceding year) was not put into practice during his lifetime. The first operational

optical semaphore line arrived in 1792, created by the French engineer Claude Chappe and his

brothers, who succeeded in covering France with a network of 556 stations stretching a total

distance of 4,800 kilometres (3,000 mi). It was used for military and national communications

until the 1850s.

UNIT-5

SOFTWARE TOOLS


GENERAL

MATLAB (matrix laboratory) is a numerical computing environment

and fourth-generation programming language. Developed by MathWorks,

MATLAB allows matrix manipulations, plotting of functions and data,

implementation of algorithms, creation of user interfaces, and interfacing with

programs written in other languages, including C, C++, Java, and Fortran.

Although MATLAB is intended primarily for numerical computing, an

optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic

computing capabilities. An additional package, Simulink, adds graphical multi-

domain simulation and Model-Based Design for dynamic and embedded systems.

In 2004, MATLAB had around one million users across industry and

academia. MATLAB users come from various backgrounds

of engineering, science, and economics. MATLAB is widely used in academic and

research institutions as well as industrial enterprises.

MATLAB was first adopted by researchers and practitioners in control engineering, the specialty of MathWorks co-founder Jack Little, but quickly spread to many other domains. It is now

also used in education, in particular the teaching of linear algebra and numerical

analysis, and is popular amongst scientists involved in image processing. The

MATLAB application is built around the MATLAB language. The simplest way to

execute MATLAB code is to type it in the Command Window, which is one of the

elements of the MATLAB Desktop. When code is entered in the Command

Window, MATLAB can be used as an interactive mathematical shell. Sequences of

commands can be saved in a text file, typically using the MATLAB Editor, as

a script or encapsulated into a function, extending the commands available.


MATLAB provides a number of features for documenting and sharing your

work. You can integrate your MATLAB code with other languages and

applications, and distribute your MATLAB algorithms and applications.

3.2 FEATURES OF MATLAB

High-level language for technical computing.

Development environment for managing code, files, and data.

Interactive tools for iterative exploration, design, and problem solving.

Mathematical functions for linear algebra, statistics, Fourier analysis,

filtering, optimization, and numerical integration.

2-D and 3-D graphics functions for visualizing data.

Tools for building custom graphical user interfaces.

Functions for integrating MATLAB based algorithms with external

applications and languages, such as C, C++, Fortran, Java™, COM, and

Microsoft Excel.

MATLAB is used in a wide range of areas, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose

MATLAB functions) extend the MATLAB environment to solve particular classes

of problems in these application areas.

MATLAB can be used on personal computers and powerful server

systems, including the Cheaha compute cluster. With the addition of the Parallel

Computing Toolbox, the language can be extended with parallel implementations


for common computational functions, including for-loop unrolling. Additionally

this toolbox supports offloading computationally intensive workloads to Cheaha, the campus compute cluster. MATLAB is one of a few languages in

which each variable is a matrix (broadly construed) and "knows" how big it is.

Moreover, the fundamental operators (e.g. addition, multiplication) are

programmed to deal with matrices when required. And the MATLAB environment

handles much of the bothersome housekeeping that makes all this possible. Since

so many of the procedures required for Macro-Investment Analysis involve

matrices, MATLAB proves to be an extremely efficient language for both

communication and implementation.
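A short example of this matrix-aware behaviour (the values are illustrative only):

A = [1 2; 3 4];  B = [5 6; 7 8];    % every variable is a matrix and knows its size
C  = A * B;                          % matrix multiplication
D  = A .* B;                         % element-wise multiplication
v  = A \ [1; 2];                     % solve the linear system A*v = [1; 2]
sz = size(A);                        % the variable "knows" how big it is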

3.2.1 INTERFACING WITH OTHER LANGUAGES

MATLAB can call functions and subroutines written in the C programming language or Fortran. A wrapper function is created allowing

MATLAB data types to be passed and returned. The dynamically loadable object

files created by compiling such functions are termed "MEX-files"

(for MATLAB executable).

Libraries written in Java, ActiveX or .NET can be directly called from

MATLAB and many MATLAB libraries (for example XML or SQL support) are

implemented as wrappers around Java or ActiveX libraries. Calling MATLAB

from Java is more complicated, but can be done with a MATLAB toolbox sold separately by MathWorks, or by using an undocumented mechanism called JMI (Java-to-MATLAB Interface), which should not be confused with the unrelated Java Metadata Interface that is also called JMI.


As alternatives to the MuPAD-based Symbolic Math Toolbox available from MathWorks, MATLAB can be connected to Maple or Mathematica.

Libraries also exist to import and export MathML.

Development Environment

Startup Accelerator for faster MATLAB startup on Windows, especially

on Windows XP, and for network installations.

Spreadsheet Import Tool that provides more options for selecting and

loading mixed textual and numeric data.

Readability and navigation improvements to warning and error

messages in the MATLAB command window.

Automatic variable and function renaming in the MATLAB Editor.

Developing Algorithms and Applications

MATLAB provides a high-level language and development tools that

let you quickly develop and analyze your algorithms and applications.

The MATLAB Language

The MATLAB language supports the vector and matrix operations that are

fundamental to engineering and scientific problems. It enables fast development

and execution. With the MATLAB language, you can program and develop

algorithms faster than with traditional languages because you do not need to

perform low-level administrative tasks, such as declaring variables, specifying data


types, and allocating memory. In many cases, MATLAB eliminates the need for

‘for’ loops. As a result, one line of MATLAB code can often replace several lines

of C or C++ code.
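For example, the element-wise loop below (shown only for comparison) collapses to a single vectorized line:

x = 0:0.01:10;
% Loop version, as it might be written in C:
y = zeros(size(x));
for n = 1:numel(x)
    y(n) = x(n)^2 + 3*x(n);
end
% Equivalent single line of MATLAB, with no explicit loop:
y = x.^2 + 3*x;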

At the same time, MATLAB provides all the features of a traditional

programming language, including arithmetic operators, flow control, data

structures, data types, object-oriented programming (OOP), and debugging

features.

MATLAB lets you execute commands or groups of commands one at a

time, without compiling and linking, enabling you to quickly iterate to the optimal

solution. For fast execution of heavy matrix and vector computations, MATLAB

uses processor-optimized libraries. For general-purpose scalar computations,

MATLAB generates machine-code instructions using its JIT (Just-In-Time)

compilation technology.

This technology, which is available on most platforms, provides execution

speeds that rival those of traditional programming languages.

Development Tools

MATLAB includes development tools that help you implement your

algorithm efficiently. These include the following:

MATLAB Editor 

Provides standard editing and debugging features, such as setting

breakpoints and single stepping


Code Analyzer 

Checks your code for problems and recommends modifications to

maximize performance and maintainability

MATLAB Profiler 

Records the time spent executing each line of code

Directory Reports 

Scan all the files in a directory and report on code efficiency, file

differences, file dependencies, and code coverage

Designing Graphical User Interfaces

Graphical user interfaces can be built with the interactive tool GUIDE (Graphical User Interface Development Environment), which is used to lay out, design, and edit user interfaces. GUIDE lets you include list boxes, pull-down menus, push buttons, radio buttons, and sliders, as well as MATLAB plots and Microsoft ActiveX® controls. Alternatively, you can create GUIs programmatically using MATLAB functions.
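A minimal sketch of the programmatic approach; the window name, layout, and callback are illustrative only:

f  = figure('Name','Demo GUI','NumberTitle','off');      % main window
ax = axes('Parent',f,'Position',[0.1 0.35 0.8 0.6]);     % plot area
uicontrol('Parent',f,'Style','pushbutton','String','Plot data', ...
          'Units','normalized','Position',[0.4 0.08 0.2 0.12], ...
          'Callback',@(src,evt) plot(ax,randn(1,50)));   % button that plots random data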


3.2.2 ANALYZING AND ACCESSING DATA

MATLAB supports the entire data analysis process, from acquiring data

from external devices and databases, through preprocessing, visualization, and

numerical analysis, to producing presentation-quality output.

Data Analysis

MATLAB provides interactive tools and command-line functions for data analysis operations, including the following (a short example appears after the list):

Interpolating and decimating

Extracting sections of data, scaling, and averaging

Thresholding and smoothing

Correlation, Fourier analysis, and filtering

1-D peak, valley, and zero finding

Basic statistics and curve fitting

Matrix analysis
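As a small example of the interpolation, basic-statistics, and curve-fitting items above; the data are synthetic and purely illustrative:

x  = (0:0.1:10)';
y  = 2*x + 1 + 0.5*randn(size(x));   % noisy straight line (synthetic data)
mu = mean(y);  sigma = std(y);       % basic statistics
p  = polyfit(x, y, 1);               % least-squares fit of a first-order polynomial
yf = polyval(p, x);                  % evaluate the fitted line
xi = (0:0.05:10)';
yi = interp1(x, y, xi, 'linear');    % interpolation onto a finer grid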

Data Access

MATLAB is an efficient platform for accessing data from files, other

applications, databases, and external devices. You can read data from popular file

formats, such as Microsoft Excel; ASCII text or binary files; image, sound, and

video files; and scientific files, such as HDF and HDF5. Low-level binary file I/O


functions let you work with data files in any format. Additional functions let you

read data from Web pages and XML.
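A brief sketch of typical data-access calls; the file names are hypothetical, and readmatrix is available only in newer releases:

M   = readmatrix('measurements.csv');   % numeric data from a CSV/text file (newer releases)
E   = xlsread('results.xlsx');          % numeric data from a Microsoft Excel sheet
fid = fopen('capture.bin','r');         % low-level binary file I/O
raw = fread(fid, inf, 'uint8');
fclose(fid);
img = imread('photo.png');              % image file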

Visualizing Data

All the graphics features that are required to visualize engineering and

scientific data are available in MATLAB. These include 2-D and 3-D plotting

functions, 3-D volume visualization functions, tools for interactively creating plots,

and the ability to export results to all popular graphics formats. You can customize

plots by adding multiple axes; changing line colors and markers; adding annotations, LaTeX equations, and legends; and drawing shapes.

2-D Plotting

Vectors of data can be visualized with 2-D plotting functions that create the following (a brief example appears after the list):

Line, area, bar, and pie charts.

Direction and velocity plots.

Histograms.

Polygons and surfaces.

Scatter/bubble plots.

Animations.
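A brief example of a few of these 2-D plot types, using illustrative data:

x = 0:0.1:2*pi;
subplot(2,2,1); plot(x, sin(x));                      % line plot
subplot(2,2,2); bar(rand(1,5));                       % bar chart
subplot(2,2,3); hist(randn(1,1e3));                   % histogram of random data
subplot(2,2,4); scatter(randn(100,1), randn(100,1));  % scatter plot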

3-D Plotting and Volume Visualization


MATLAB provides functions for visualizing 2-D matrices, 3-D scalar data, and 3-D vector data. You can use these functions to visualize and understand large, often complex, multidimensional data sets. Plot characteristics such as camera viewing angle, perspective, lighting effects, light-source locations, and transparency can also be specified.

3-D plotting functions include the following (a brief example appears after the list):

Surface, contour, and mesh.

Image plots.

Cone, slice, stream, and isosurface.
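A minimal example using MATLAB's built-in peaks test surface:

[X, Y, Z] = peaks(40);                  % built-in test surface
subplot(1,3,1); surf(X, Y, Z);          % shaded surface
subplot(1,3,2); mesh(X, Y, Z);          % wire-frame mesh
subplot(1,3,3); contour(X, Y, Z, 20);   % contour plot with 20 levels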

3.2.3 PERFORMING NUMERIC COMPUTATION

MATLAB contains mathematical, statistical, and engineering functions to

support all common engineering and science operations. These functions,

developed by experts in mathematics, are the foundation of the MATLAB

language. The core math functions use the LAPACK and BLAS linear algebra

subroutine libraries and the FFTW discrete Fourier transform library. Because these processor-dependent libraries are optimized for the different platforms that MATLAB supports, they execute faster than equivalent C or C++ code that does not use such optimized libraries.

MATLAB provides the following types of functions for performing

mathematical operations and analyzing data:


Matrix manipulation and linear algebra.

Polynomials and interpolation.

Fourier analysis and filtering.

Data analysis and statistics.

Optimization and numerical integration.

Ordinary differential equations (ODEs).

Partial differential equations (PDEs).

Sparse matrix operations.

MATLAB can perform arithmetic on a wide range of data types,

including doubles, singles, and integers.
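A few representative calls covering linear algebra, numerical integration, ordinary differential equations, and non-double data types; all values are illustrative (integral requires a relatively recent release):

A = [2 1; 1 3];  b = [1; 2];
x = A \ b;                                % linear algebra: solve A*x = b
e = eig(A);                               % eigenvalues
q = integral(@(t) exp(-t.^2), 0, 1);      % numerical integration
[tt, yy] = ode45(@(t,y) -2*y, [0 1], 1);  % ODE dy/dt = -2*y with y(0) = 1
xi = int8(200);                           % integer arithmetic (saturates at 127)
xs = single(pi);                          % single-precision value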


UNIT-6

SIMULATION RESULTS AND OUTPUTS

Figure: Modal dispersion in a fibre (effective index n_eff plotted against the fundamental wavelength, 600–1000 nm).

Figure: Modal dispersion in a nanofibre (effective index n_eff plotted against the fibre diameter).

Figure: Performance evaluation (BER plotted against SNR; simulation with Ncp = 8 compared with analysis).

Figure: Performance comparison (BER plotted against SNR for OFDM and optical OFDM).

UNIT-7

CONCLUSION

This work has presented an overview of the implementation aspects associated with DSP-based optical transceivers for future access networks by examining the optical transceiver structure and the key transceiver constituent elements, with particular focus on the DSP functionality and architecture of OOFDM-based optical transceivers. The high equipment volumes associated with optical access networks can be expected to drive down the cost of the required electronics. OOFDM is one of the leading DSP-based optical access technologies and is perceived by many as one of the main contenders for future optical access networks due to its potential for high cost-effectiveness, data capacity per wavelength far beyond 10 Gb/s, adaptability to varying network characteristics, and flexibility in terms of bandwidth allocation. Given the exponentially growing demand for data capacity and the operators' need for flexible, cost-efficient access networks, it is believed that DSP-based optical access networks will emerge in the future.


UNIT-8

REFERENCES

[1] Z. Kostic and S. Seetharaman, “Digital signal processors in cellular radio communications,” IEEE Commun. Mag., vol. 35, no. 12, pp. 22–35, Dec. 1997.

[2] S. Chaturvedi, “The role of digital signal processors (DSP) for 3G mobile communication systems,” Int. J. Emerging Technol., vol. 1, no. 1, pp. 23–26, Feb. 2010.

[3] A. Gatherer, T. Stetzler, M. McMahan, and E. Auslander, “DSP-based architectures for mobile communications: Past, present, and future,” IEEE Commun. Mag., vol. 38, pp. 84–90, Jan. 2000.

[4] “Coherent DWDM technologies,” White Paper, Infinera, 2012. [Online]. Available: www.infinera.com/pdfs/whitepapers/Infinera_Coherent_Tech.pdf

[5] T. Koonen, “Trends in optical access and in-building networks,” in Eur. Conf. Opt. Commun., Brussels, Belgium, 2008, pp. 1–31.

[6] P. Vetter, “Next generation optical access technologies,” presented at the Eur. Conf. Opt. Commun., Amsterdam, The Netherlands, 2012.

[7] Y. Luo, X. Zhou, F. Effenberger, X. Yan, G. Peng, Y. Qian, and Y. Ma, “Time- and wavelength-division multiplexed passive optical network (TWDM-PON) for next-generation PON stage 2 (NG-PON2),” J. Lightw. Technol., vol. 31, no. 4, pp. 587–593, Feb. 2013.

[8] G. Engel, D. E. Fague, and A. Toledano, “RF digital-to-analog converters enable direct synthesis of communications signals,” IEEE Commun. Mag., vol. 50, no. 10, pp. 108–116, Oct. 2012.

[9] C. Cox, E. Ackerman, R. Helkey, and G. E. Betts, “Techniques and performance of intensity-modulation direct-detection analog optical links,”

[10] A. Villafranca, J. Lasobras, and I. Garcés, “Precise characterization of the frequency chirp in directly modulated DFB lasers,” in Spanish Conf. Electron. Devices, 2007, pp. 173–176.

[11] M. Müller and M. C. Amann, “State-of-the-art and perspectives for long-wavelength high speed VCSELs,” in 13th Int. Conf. Transp. Opt. Netw., Stockholm, Sweden, 2011, pp. 1–4, Paper Mo.C5.2.

[12] C. H. Yeh and C. W. Chow, “10-Gb/s upstream TDM-PON based on four WDM signals with OFDM-QAM remodulation,” in 13th OptoElectron. Commun. Conf., Hong Kong, 2009, pp. 1–2, Paper ThLP46.

[13] M. F. Huang, D. Qian, and N. Cvijetic, “A novel symmetric lightwave centralized WDM-OFDM-PON architecture with OFDM-remodulated ONUs and a coherent receiver OLT,” presented at the Eur. Conf. Opt. Commun., Geneva, Switzerland, 2011, Paper Tu.5.C.1.

[14] N. Cvijetic, “OFDM for next-generation optical access networks,” J. Lightw. Technol., vol. 30, no. 4, pp. 384–398, Feb. 2012.

[15] J. L. Wei, J. D. Ingham, D. G. Cunningham, R. V. Penty, and I. H. White, “Performance and power dissipation comparisons between 28 Gb/s NRZ, PAM, CAP and optical OFDM systems for data communication applications,” J. Lightw. Technol., vol. 30, no. 20, pp. 3273–3280, Oct. 2012.
