Multi-Gigabit Reception with Time-interleaved Analog-to-Digital Converters
- Degree Grantor:
- University of California, Santa Barbara. Electrical & Computer Engineering
- Degree Supervisor:
- Upamanyu Madhow
- Place of Publication:
- [Santa Barbara, Calif.]
- University of California, Santa Barbara
- Creation Date:
- Issued Date:
- Engineering, Electronics and Electrical
- Dissertations, Academic and Online resources
- Ph.D.--University of California, Santa Barbara, 2011
Moore's law drives the economies of scale in modern communication systems, with most receiver functionality implemented in digital signal processing (DSP) after analog-to-digital conversion. Extending this computational advantage to multi-Gigabit communication systems requires analog-to-digital converters (ADCs) with high sampling rates and high output resolution.
A promising approach to realizing such ADCs at reasonable power consumption is to employ a time-interleaved (TI) architecture with slower (but power-efficient) sub-ADCs operating in parallel. However, mismatch among the sub-ADCs, if left uncompensated, can cause error floors in receiver performance. Traditionally, mismatch is compensated by employing larger transistors, by analog adjustments, or by dedicated digital mismatch compensation whose complexity increases with the desired resolution at the output of the TI-ADC. In this thesis, we investigate a novel approach in which mismatch and channel dispersion are compensated jointly, with the performance metric being overall link reliability rather than ADC performance.
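As a rough illustration of the mismatch effects described above (a minimal sketch, not the thesis's system model), the following simulates a 4-way TI-ADC sampling a pure tone, where each sub-ADC applies its own hypothetical gain, offset, and timing-skew error. Gain and timing mismatch produce image tones at f0 + k*fs/M, and offset mismatch produces tones at k*fs/M; all parameter values are illustrative.

```python
import numpy as np

def ti_adc_sample(x_fn, fs, n_samples, gains, offsets, skews):
    """Sample a continuous-time signal with a time-interleaved ADC.

    Sample n is taken by sub-ADC (n mod M), which applies its own
    gain, offset, and timing skew (all hypothetical parameters).
    """
    M = len(gains)
    ts = 1.0 / fs
    y = np.empty(n_samples)
    for n in range(n_samples):
        m = n % M                      # sub-ADC serving this sample
        t = n * ts + skews[m]          # timing mismatch shifts the instant
        y[n] = gains[m] * x_fn(t) + offsets[m]
    return y

# A pure tone sampled by a 4-way TI-ADC with small (illustrative) mismatches.
fs, f0, N = 1.0e9, 123.0e6, 4096
x = lambda t: np.sin(2 * np.pi * f0 * t)
y = ti_adc_sample(x, fs, N,
                  gains=[1.0, 1.02, 0.99, 1.01],
                  offsets=[0.0, 5e-3, -3e-3, 2e-3],
                  skews=[0.0, 2e-12, -1e-12, 3e-12])

# Gain/timing mismatch creates image tones at f0 + k*fs/4 (e.g. near
# 127 MHz after aliasing); offset mismatch creates tones at k*fs/4.
spec = np.abs(np.fft.rfft(y * np.hanning(N)))
```

Left uncompensated, these image tones act as interference superimposed on the desired signal, which is the origin of the error floors mentioned above.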
We first characterize the structure of mismatch-induced interference for an OFDM system and demonstrate the efficacy of a frequency-domain interference suppression scheme whose complexity is independent of constellation size (which determines the desired resolution). Numerical results from computer simulations and from experiments on a hardware prototype show that performance with the proposed joint mismatch and channel compensation technique is close to that of a system without mismatch.
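The structured nature of this interference can be seen in a toy example (a sketch under simplifying assumptions, not the thesis's derivation): with M sub-ADCs, gain mismatch acts as a periodic time-domain modulation with period M, so a tone on subcarrier k leaks only onto the subcarriers k + n*N/M (mod N). This sparsity is what makes frequency-domain suppression tractable.

```python
import numpy as np

N, M = 64, 4                      # illustrative OFDM size and sub-ADC count
X = np.zeros(N, dtype=complex)
X[5] = 1.0                        # a single active subcarrier
x = np.fft.ifft(X)                # time-domain OFDM symbol

# Periodic per-sub-ADC gain mismatch applied in the time domain.
gains = 1.0 + np.array([0.0, 0.03, -0.02, 0.01])
y = np.tile(gains, N // M) * x

# In the frequency domain, energy appears only at subcarriers
# 5 + n*N/M (mod N), i.e. 5, 21, 37, 53 for N=64, M=4.
Y = np.fft.fft(y)
active = np.where(np.abs(Y) > 1e-9)[0]
```

Because each subcarrier couples to only M - 1 others, interference suppression can operate on small groups of subcarriers at a time, independently of the constellation carried on them.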
Next, we explore time-domain mismatch compensation approaches that scale with the number of sub-ADCs and the desired resolution. We show that low-complexity linear mismatch compensation is possible if we employ oversampling. We establish a strong analogy between the role of oversampling for mismatch compensation and for channel equalization, even though the structure of the interference due to mismatch is different from that due to a dispersive channel.
While the proposed compensation architectures work with offline estimates of the mismatch parameters, we also provide an iterative, online method for joint estimation of mismatch and channel parameters that leverages the training overhead already present in communication signals. We derive a closed-form solution for each iteration, for both the channel and mismatch estimates, based on a linear approximation of the nonlinear effect of timing mismatch. We investigate the scalability and convergence of this joint estimation algorithm and derive rules of thumb relating the required length of pseudorandom training to the number of sub-ADCs. Further, we design periodic training sequences with significantly enhanced convergence rates.
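The key linearization can be illustrated with a simplified sketch (assuming a known training signal and an ideal channel, unlike the joint channel estimation in the thesis): a small timing skew d gives x(t + d) ~ x(t) + d*x'(t), so each sub-ADC's output becomes linear in its gain and gain-skew product, and a closed-form least-squares fit recovers both. All signal and parameter choices below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, fs = 4096, 4, 1.0
t = np.arange(N) / fs

# Band-limited pseudorandom training and its exact derivative (both
# assumed known at the receiver in this sketch).
freqs = rng.uniform(0.01, 0.2, 8)
phases = rng.uniform(0, 2 * np.pi, 8)
s = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
ds = sum(2 * np.pi * f * np.cos(2 * np.pi * f * t + p)
         for f, p in zip(freqs, phases))

true_gain = np.array([1.0, 1.05, 0.97, 1.02])
true_skew = np.array([0.0, 0.02, -0.015, 0.01])  # in sample periods

# TI-ADC output under the linearized timing model: y ~ g*(s + d*s').
y = np.empty(N)
for m in range(M):
    idx = np.arange(m, N, M)
    y[idx] = true_gain[m] * (s[idx] + true_skew[m] * ds[idx])

# Closed-form least squares per sub-ADC with regressors [s, s'].
est_gain = np.empty(M)
est_skew = np.empty(M)
for m in range(M):
    idx = np.arange(m, N, M)
    A = np.column_stack([s[idx], ds[idx]])
    g, g_times_d = np.linalg.lstsq(A, y[idx], rcond=None)[0]
    est_gain[m], est_skew[m] = g, g_times_d / g
```

In the full algorithm, an unknown dispersive channel makes the regressors themselves depend on the channel estimate, which is why the thesis alternates between closed-form channel and mismatch updates rather than solving a single least-squares problem.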
- Physical Description:
- 1 online resource (141 pages)
- UCSB electronic theses and dissertations
- Catalog System Number:
- Sandeep Ponnuru, 2011
- In Copyright
- Copyright Holder:
- Sandeep Ponnuru
- Access: This item is restricted to on-campus access only.