Communication over unreliable, interfering networks is one of the central challenges in modern engineering. For point-to-point channels, Shannon established capacity results in 1948, yet it took more than forty years to find coded systems that approach the capacity limit with feasible complexity. Significant research effort has gone into extending Shannon's capacity results to networks, with many partial successes. By contrast, the development of codes of reasonable complexity for networks (codes implementable by modern signal-processing circuits) has received limited attention to date.
Using both algebraic structure and probability theory, I will present the main ideas behind the following low-complexity codes we have developed for certain classes of networks: i) a broadcast code that achieves rates on the capacity boundary of several classes of multi-user broadcast channels. The code is based on Arıkan's polarization theory of random variables and provides insight into information-theoretic concepts such as random binning, superposition coding, and Marton's construction. Experimental evidence and simulation results are provided; ii) a network code that achieves the computing capacity for a countably infinite class of simple noiseless interfering networks. The code separates a network into constituent irreducible networks, over which a function-alignment strategy is applied.
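As a hedged illustration of the polarization phenomenon underlying the broadcast code (a textbook sketch, not the thesis's actual construction), the following Python snippet applies Arıkan's basic transform recursively to a binary erasure channel BEC(ε), for which the synthetic channels' erasure probabilities have an exact closed form. All names and parameters here are illustrative.

```python
def polarize_bec(eps, n):
    """Erasure probabilities of the 2**n synthetic channels obtained by
    recursively applying the basic polar transform to BEC(eps).

    For the BEC, the two polarized channels have exact erasure probabilities:
      'minus' channel: e -> 2e - e**2   (degraded)
      'plus'  channel: e -> e**2        (upgraded)
    """
    channels = [eps]
    for _ in range(n):
        channels = [f(e) for e in channels
                    for f in (lambda e: 2 * e - e * e, lambda e: e * e)]
    return channels

# 2**10 = 1024 synthetic channels from BEC(0.5)
chans = polarize_bec(0.5, 10)
good = sum(1 for e in chans if e < 1e-3)       # nearly perfect channels
bad = sum(1 for e in chans if e > 1 - 1e-3)    # nearly useless channels
mean = sum(chans) / len(chans)                 # total capacity is preserved
```

Because the transform conserves the sum of erasure probabilities (2e − e² plus e² equals 2e), the average stays at ε while individual channels split toward 0 or 1; coding then places information on the nearly perfect channels.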
In my dissertation, I also discuss low-complexity transmission of sources across noisy networks based on dimensionality reduction and convex optimization. In some cases, simple uncoded strategies achieve performance that is exactly optimal, or close to optimal in the low signal-to-noise-ratio regime relevant for sensor networks.