Rate distortion theory studies the fundamental tradeoff between the rate at which data can be compressed and the fidelity with which it can be reproduced. In his 1959 paper, Claude Shannon gave a precise characterization of this tradeoff when compression is performed by a single encoder. The multiterminal source coding problem extends this setting to multiple encoders. That is, if correlated sources are available at separate encoders, what is the tradeoff between the compression rate at each encoder and the fidelity to which each source can be reproduced? While this and related distributed compression problems remain largely open, their importance is keenly felt as the sheer quantity of data being collected often necessitates distributed processing.

In this talk, I will discuss compression under logarithmic loss. Although nonstandard, logarithmic loss is a natural choice for systems that produce "soft decisions" (e.g., a web search returns a list of links ranked by relevance, rather than a single recommendation). By restricting our attention to measuring reproduction fidelity under logarithmic loss, we characterize the rate distortion tradeoff for the multiterminal source coding problem and the closely related CEO problem. Notably, we impose no restrictions on the source distribution, and therefore our results constitute the first complete solution to the multiterminal source coding and CEO problems for general sources (the only case solved previously being that of jointly Gaussian sources subject to mean-square error constraints). Time permitting, I will discuss connections between compression under logarithmic loss and the celebrated quadratic Gaussian multiterminal results.
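For concreteness, under one standard formalization of logarithmic loss, the reproduction alphabet is the set of probability distributions over the source alphabet, and the distortion incurred when a source symbol x is reproduced by a distribution x-hat is

```latex
d(x, \hat{x}) = \log \frac{1}{\hat{x}(x)}
```

so the loss is small precisely when the soft decision assigns high probability to the true symbol, which is why this measure suits systems that output ranked or probabilistic answers rather than a single hard estimate.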