Compression and Dispersive Information Routing for Networks
- Degree Grantor:
- University of California, Santa Barbara. Electrical & Computer Engineering
- Degree Supervisor:
- Kenneth Rose
- Place of Publication:
- [Santa Barbara, Calif.]
- University of California, Santa Barbara
- Engineering, Electronics and Electrical; Engineering, Computer
- Source coding, Large scale sensor networks, Information theoretic bounds, and Distributed source channel coding
- Dissertations, Academic and Online resources
- Ph.D.--University of California, Santa Barbara, 2013
This dissertation focuses on problems related to the compression, routing and communication of correlated sources across a network. The first part of the dissertation derives information theoretic bounds that characterize the performance limits of joint compression-routing systems. The second part addresses practical design issues in large-scale distributed quantization. The results in this dissertation can be grouped into four major categories, each described briefly in the following paragraphs.
The first problem concerns minimum-cost joint compression and routing of correlated sources across a network. A new routing paradigm, called "dispersive information routing" (DIR), is introduced, wherein the intermediate nodes are allowed to split a packet and forward different subsets of the received bits along each of the forward paths. Information theoretic bounds on the minimum cost achievable using DIR are derived, and it is demonstrated using simple examples that, for certain networks, DIR asymptotically outperforms conventional routing techniques, with the gains growing unboundedly with the network size. Also, a practical design strategy is proposed that yields low-delay dispersive information routers achieving significantly better performance than conventional routing techniques.
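The cost advantage described above can be illustrated with a toy sketch (a hypothetical model with made-up numbers, not the dissertation's actual cost formulation): conventional routing carries the whole packet over every forward path, while a DIR-style split forwards to each sink only the bits it needs.

```python
def conventional_cost(packet_bits, path_hops):
    # Conventional routing: the entire packet traverses every forward path,
    # so the cost (in bit-hops) is the packet size times each path length.
    return sum(packet_bits * hops for hops in path_hops)

def dir_cost(bit_shares, path_hops):
    # Dispersive information routing (toy model): the intermediate node splits
    # the packet and forwards only the subset of bits each sink requires.
    return sum(bits * hops for bits, hops in zip(bit_shares, path_hops))

# Hypothetical example: a 10-bit packet and two forward paths of 3 and 4 hops.
# Under DIR, suppose sink 1 needs only 6 of the bits and sink 2 only 5.
paths = [3, 4]
print(conventional_cost(10, paths))  # 10*3 + 10*4 = 70 bit-hops
print(dir_cost([6, 5], paths))       # 6*3 + 5*4  = 38 bit-hops
```

The gap between the two costs widens as more sinks and longer paths are added, which is the intuition behind the unbounded gains mentioned above.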
The second problem is concerned with multiple descriptions coding. The problem was originally posed as a method to cope with packet failures in network scenarios, where the objective is to design the encoders (for each packet) and decoders (for each packet-loss pattern) to achieve the best rate-distortion performance. Two new encoding principles are proposed for the L-channel multiple descriptions problem. The first approach involves an encoding strategy called "combinatorial message sharing", where a unique common codeword is sent within each subset of the descriptions. For a general class of source-distortion pairs (which includes a Gaussian source under mean squared error), the resulting achievable rate-distortion region strictly improves upon the best-known region for the problem. The second approach is based on an encoding principle called "subset typicality", where it is assumed that certain subsets of the descriptions are never received simultaneously at the decoder. This allows the encoder to maintain joint typicality only within subsets of transmitted codewords and hence leads to a strictly improved achievable region.
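The combinatorics behind "combinatorial message sharing" can be sketched by enumerating the description subsets that could each carry a shared codeword. The sketch below assumes, as one natural reading, that every subset of size at least two gets its own common message; the function name and this counting are illustrative only, not the dissertation's construction.

```python
from itertools import combinations

def shared_message_subsets(L):
    """Enumerate subsets of the L descriptions that could carry a shared
    codeword: every subset of size >= 2 (size-1 subsets are just the
    private descriptions). Illustrative toy count, not the actual scheme."""
    descriptions = range(1, L + 1)
    return [s for k in range(2, L + 1) for s in combinations(descriptions, k)]

# For L = 3 descriptions there are 2^3 - 3 - 1 = 4 such subsets.
print(shared_message_subsets(3))  # [(1, 2), (1, 3), (2, 3), (1, 2, 3)]
```

This exponential growth in the number of potential shared messages (2^L - L - 1) is what makes the L-channel problem combinatorially richer than the classical two-description setting.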
A problem closely related to dispersive information routing is that of characterizing the common information (CI) of two dependent random variables. Although the information theoretic characterizations of the two best-known definitions of CI can be easily evaluated for random variables with infinite entropy (e.g., continuous random variables), the intuition underlying their definitions does not generalize. In this work, single-letter information theoretic characterizations of CI are derived for the lossy generalization of the Gray-Wyner network, which allows us to extend the intuition underlying their definitions to continuous random variables. Close connections between lossy CI and the problem of successive refinement are established, and a new encoding mechanism is proposed for encoding sources that are not successively refinable.
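For reference, the two best-known definitions of CI alluded to above are Wyner's and the Gács-Körner notion; their standard single-letter characterizations (standard results from the literature, not the dissertation's lossy generalizations) are:

```latex
% Wyner's common information: the smallest rate of a variable W that renders
% X and Y conditionally independent (X - W - Y forms a Markov chain).
C_W(X;Y) \;=\; \min_{P_{W \mid XY} \,:\; X - W - Y} I(X,Y;W)

% Gacs-Korner common information: the largest entropy of a variable that both
% parties can compute deterministically from their own observation.
C_{GK}(X;Y) \;=\; \max_{f,\, g \,:\, f(X) = g(Y) \ \text{a.s.}} H(f(X))
```

For continuous $X$ and $Y$, $H(f(X))$ and related entropy quantities are typically infinite, which is the obstacle the lossy Gray-Wyner formulation above is designed to circumvent.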
The fourth problem addresses the design of distributed quantizers for large-scale sensor networks operating in severe channel conditions. Two major obstacles have posed existential threats to the practical deployment of conventional distributed coders in real-world sensor networks: the exponential growth of decoder complexity with the network size and transmit rates, and the critical requirement of error/erasure resilience given the severe channel conditions in most practical sensor networks. In this work, a unified framework for the design of error/erasure-resilient and complexity-constrained distributed quantizers is proposed, where the decoder complexity is explicitly controlled during the design. Simulation results indicate substantial gains over other state-of-the-art techniques and provide strong evidence that the approach opens the door to practical deployment of distributed coding in real-world sensor networks.
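The complexity blow-up mentioned above can be made concrete with a back-of-the-envelope count (an illustrative calculation with hypothetical numbers, not the dissertation's complexity-control mechanism): a naive joint decoder must store one reconstruction per combination of received quantizer indices.

```python
def joint_decoder_cells(rates_bits):
    # A naive joint decoder indexes one reconstruction per combination of
    # received quantizer indices: prod(2^R_i) = 2^(sum of R_i) table cells.
    total = 1
    for r in rates_bits:
        total *= 2 ** r
    return total

# Hypothetical network: 10 sensors transmitting 4 bits each gives a table of
# 2^40 (about 10^12) cells, which is why the decoder complexity must be
# explicitly constrained during the design.
print(joint_decoder_cells([4] * 10))  # 1099511627776
```

Since the table size is exponential in both the number of sensors and their rates, even modest networks overwhelm an unconstrained decoder, motivating the complexity-constrained framework described above.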
- Physical Description:
- 1 online resource (392 pages)
- UCSB electronic theses and dissertations
- Kumar Viswanatha, 2013
- In Copyright
- Copyright Holder:
- Kumar Viswanatha
- Access: This item is restricted to on-campus access only. Please check our FAQs or contact UCSB Library staff if you need additional assistance.