Wednesday, June 05, 2013

Macroscopes: Models for collective decision making

Ramamoorthy, S., Salamon, A. Z., & Santhanam, R. (2012).
arXiv preprint arXiv:1204.3860.
This paper is by colleagues in the University of Edinburgh School of Informatics.  I heard about some aspects of it at a recent workshop on computer-mediated social sense-making.

The paper describes some motivating applications where decisions need to be made by multiple agents with access to different subsets of the needed information.  Examples include electronic markets, sensor networks, and program committees (each PC member reviews a subset of the papers).

The paper reviews communication complexity, where (roughly) the idea is for multiple agents to compute some common function while minimizing the amount of communication needed.  Classical communication complexity focuses on uniform situations where $f : \{0,1\}^{kn} \to \{0,1\}$ and there are $k$ agents, each of which has access to a different contiguous block of $n$ bits (and implicitly, each agent knows which bits the other agents hold, but not their values).
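As a toy illustration of the model (my sketch, not from the paper): $k$ agents each hold a contiguous block of the input, messages are posted to a shared blackboard, and the cost of a protocol is the total number of bits communicated.  Even computing the AND of all $kn$ bits needs only one bit per agent:

```python
# Toy number-in-hand model: k agents each hold a contiguous block of x.
# Protocol cost = total number of bits posted to the shared blackboard.

def split(x_bits, k):
    """Partition the input into k equal contiguous blocks (one per agent)."""
    n = len(x_bits) // k
    return [x_bits[i * n:(i + 1) * n] for i in range(k)]

def protocol_and(blocks):
    """Each agent posts 1 bit: 'is my block all ones?'.
    Computes AND of all input bits with k bits of communication,
    versus k*n bits for naively sending everything."""
    messages = [all(b == 1 for b in block) for block in blocks]
    return all(messages), len(messages)  # (answer, bits communicated)

answer, cost = protocol_and(split([1, 1, 1, 0, 1, 1], 3))
# answer == False (there is a 0 in the input), cost == 3 bits instead of 6
```

The interesting questions start when $f$ is less trivial than AND, or when the agents do not know who holds what.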

In this paper the authors consider two variations of this situation: first, arbitrary distributions of the data to agents; second, single-blind vs. double-blind settings.  Single-blind means that agents know what information the other agents have, but not its content, while double-blind means they only know what information they themselves have.  (These terms do not necessarily mean anything related to what they would mean in, say, the PC context.)

The paper goes on to investigate lower and upper bounds for several specific problems:
1. Are all of the bits constant?
2. Is the input a boolean step function (i.e., matching $0^* 1^*$)?
3. What is the average (within $\epsilon$) of $N$ real values?
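For problem 2, in the single-blind setting with contiguous blocks, a cheap protocol is possible; here is my own sketch (not the paper's protocol), in which each agent sends just 3 bits and a referee combines them:

```python
# Sketch of a low-communication test for "is the input of the form 0*1*?"
# with contiguous blocks.  Each agent sends 3 bits about its block; total
# cost is 3k bits, versus k*n bits for sending the whole input.

def agent_message(block):
    """3 bits: (does my block match 0*1*?, my first bit, my last bit)."""
    seen_one = False
    ok = True
    for b in block:
        if b == 1:
            seen_one = True
        elif seen_one:       # a 0 after a 1: block is not of the form 0*1*
            ok = False
    return (ok, block[0], block[-1])

def is_step(blocks):
    """Referee: every block must be 0*1*, and no block ending in 1 may be
    followed by a block starting with 0 (boundaries nondecreasing)."""
    msgs = [agent_message(b) for b in blocks]
    if not all(ok for ok, _, _ in msgs):
        return False
    return all(last <= nxt_first
               for (_, _, last), (_, nxt_first, _) in zip(msgs, msgs[1:]))

assert is_step([[0, 0], [0, 1], [1, 1]])   # 000111 is a step function
assert not is_step([[0, 1], [0, 1]])       # 0101 is not
```

The point of the paper's lower bounds is that in the double-blind setting, or with arbitrary (non-contiguous) distributions, agents can no longer exploit position information this way, and such cheap protocols may not exist.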
Interestingly, there are significant differences between the single- and double-blind scenarios, showing that meta-information about how the data is distributed among agents can substantially change the communication cost.

The proofs and presentation are clear and thorough.  I think the paper may be a good starting point for a line of work on understanding the impact of implicit or background knowledge on collective decision making.  I'm particularly interested in the possible connection to provenance, since provenance is often motivated as providing context that could inform decision-making, but (to my knowledge) there is no clear explanation of what this really means, or of how the information actually improves decision-making.

Stray observations:
• It would be interesting to model the program committee problem.  Doing so would require an (unrealistic?) assumption that there is a well-defined function $f$ that takes all of the papers and sorts them by quality.  One could then try to synthesize a total or partial order of the papers.
• One can view the "identify the champion" pattern as an example of an attempt to minimize the communication complexity of the PC selection process.
• Some PCs have experimented with variations on the usual scoring or championing schemes for deciding which papers to accept/discuss.  For example, ICFP 2012 gave each PC member a fixed "budget" to spend on papers they liked.
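A budget scheme of this flavor is easy to sketch (the details below are invented for illustration; I don't know the exact mechanics ICFP 2012 used): each PC member spreads a fixed budget of points over the papers they reviewed, and papers are ranked by total points received.

```python
# Hypothetical "budget" PC scheme: each member allocates up to BUDGET
# points across papers; papers are ranked by total points received.

BUDGET = 100  # points per PC member (assumed value, for illustration)

def rank_papers(allocations):
    """allocations: {member: {paper: points}}, each summing to <= BUDGET.
    Returns papers sorted by total points, highest first."""
    totals = {}
    for member, spent in allocations.items():
        assert sum(spent.values()) <= BUDGET, f"{member} overspent"
        for paper, points in spent.items():
            totals[paper] = totals.get(paper, 0) + points
    return sorted(totals, key=totals.get, reverse=True)

ranking = rank_papers({
    "alice": {"paper1": 60, "paper2": 40},
    "bob":   {"paper2": 70, "paper3": 30},
})
# paper2 collects 110 points, paper1 60, paper3 30
```

In communication-complexity terms, each member transmits only an allocation vector rather than full reviews, which is exactly the kind of compressed signalling the paper's framework could analyze.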