Multi-issue voting is a setting in which several issues have to be decided by voting. Multi-issue voting raises several considerations that are not relevant in single-issue voting.
The first consideration is attaining fairness both for the majority and for minorities. To illustrate, consider a group of friends who decide each evening whether to go to a movie or a restaurant. Suppose that 60% of the friends prefer movies and 40% prefer restaurants. In a one-time vote, the group will probably accept the majority preference and go to a movie. However, making the same decision again and again, evening after evening, is unfair, since it satisfies 60% of the friends 100% of the time, while the other 40% are never satisfied. Treating this problem as multi-issue voting makes it possible to attain a fair sequence of decisions, by going to a movie on 60% of the evenings and to a restaurant on 40% of them. The study of fair multi-issue voting mechanisms is sometimes called fair public decision making.[1] The special case in which the different issues are decisions in different time periods, and the number of time periods is not known in advance, is called perpetual voting.[2][3][4]
The second consideration is the potential dependence between the different issues. For example, suppose the issues are two suggestions for funding public projects. A voter may support funding each project on its own, but object to funding both projects simultaneously, due to the negative effect on the city budget. If there are only a few issues, it is possible to ask each voter to rank all possible combinations of candidates. However, the number of combinations increases exponentially with the number of issues, so this approach is not practical when there are many issues. The study of this setting is sometimes called combinatorial voting.[5]
There are several issues to be decided on. For each issue t, there is a set Ct of candidates or alternatives to choose from. For each issue t, a single candidate from Ct should be elected. Voters may have different preferences regarding the candidates. The preferences can be numeric (cardinal ballots), ranked (ordinal ballots), or binary (approval ballots). In combinatorial settings, voters may have preferences over combinations of candidates.
A multi-issue voting rule is a rule that takes the voters' preferences as an input, and returns the elected candidate for each issue. Multi-issue voting can take place offline or online:
With cardinal ballots, each voter assigns a numeric utility to each alternative in each round. The total utility of a voter is the sum of the utilities they assign to the elected candidates over all rounds.
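As a concrete illustration, the following sketch (not taken from any of the cited papers; the data layout and names are assumptions made for this example) computes a voter's total utility from a table of per-round cardinal utilities and a mapping of rounds to elected candidates:

```python
# A minimal sketch of total utility under cardinal ballots.
# utilities[voter][round][candidate] is the value the voter assigns to that
# candidate in that round; elected maps each round to its elected candidate.

def total_utility(utilities, elected, voter):
    """Sum of the voter's utilities for the elected candidate of each round."""
    return sum(utilities[voter][t][c] for t, c in elected.items())

# Example: two rounds; voter 0 values the winners of rounds 0 and 1 at 3 and 1.
utilities = {0: {0: {"a": 3, "b": 0}, 1: {"x": 1, "y": 4}}}
elected = {0: "a", 1: "x"}
assert total_utility(utilities, elected, voter=0) == 4
```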
Conitzer, Freeman and Shah[1] study multi-issue voting with offline cardinal ballots (they introduced the term public decision making). They focus on fairness towards individual agents. A natural fairness requirement in this setting is proportional division, by which each agent should receive at least 1/n of their maximum utility. Since proportionality might not be attainable, they suggest three relaxations:
These relaxations make sense when the number of voters is small and the number of issues is large, so that a difference of one issue is small relative to a 1/n share. They show that the Maximum Nash Welfare solution (maximizing the product of all agents' utilities) satisfies or approximates all three relaxations. They also provide polynomial-time algorithms and hardness results for finding allocations satisfying these axioms, with or without Pareto efficiency.
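For small instances, a Maximum Nash Welfare outcome can be found by brute force over all combinations of per-issue winners, as in the following illustrative sketch (the cited work uses more refined algorithms; the data layout here is an assumption):

```python
# Illustrative brute-force computation of a Maximum Nash Welfare outcome
# (maximizing the product of agents' utilities); only practical for
# very small instances.
from itertools import product
from math import prod

def max_nash_welfare(candidates_per_issue, utilities):
    """candidates_per_issue: list of candidate lists, one per issue.
    utilities[i][t][c]: utility of agent i for candidate c on issue t."""
    best_outcome, best_value = None, -1.0
    for outcome in product(*candidates_per_issue):
        value = prod(
            sum(agent_util[t][c] for t, c in enumerate(outcome))
            for agent_util in utilities
        )
        if value > best_value:
            best_outcome, best_value = outcome, value
    return best_outcome

# Two agents with opposite interests on two binary issues: the Nash product
# selects the outcome ("yes", "yes") that gives both agents positive utility.
candidates_per_issue = [["yes", "no"], ["yes", "no"]]
utilities = [
    {0: {"yes": 2, "no": 0}, 1: {"yes": 0, "no": 1}},
    {0: {"yes": 0, "no": 1}, 1: {"yes": 2, "no": 0}},
]
print(max_nash_welfare(candidates_per_issue, utilities))
```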
Freeman, Zahedi and Conitzer[7] study multi-issue voting with online cardinal ballots. They present two greedy algorithms that aim to maximize the long-term Nash welfare (product of all agents' utilities). They evaluate their algorithms on data gathered from a computer systems application.
In the simplest multi-issue voting setting, there is a set of issues and each agent votes either for or against each issue (effectively, there is a single candidate in each round). Amanatidis, Barrot, Lang, Markakis and Ries[8] present several voting rules for this setting, based on the Hamming distance:
Barrot, Lang and Yokoo[9] study the manipulability of these OWA-based rules. They prove that the only strategyproof OWA rule with non-increasing weights is the utilitarian rule. They also empirically study a sub-family of the OWA-based rules, characterized by a parameter p that represents a property called "orness" of the OWA rule: p=0.5 yields utilitarian AV, whereas p=1 yields egalitarian AV. They show empirically that increasing p results in a larger fraction of random profiles that can be manipulated by at least one voter.
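The following sketch illustrates the general shape of OWA-based rules over Hamming distances: each possible outcome is scored by an ordered weighted average of the voters' Hamming distances, and a score-minimizing outcome is elected. Equal weights give the utilitarian (minisum) rule, while putting all weight on the largest distance gives the egalitarian (minimax) rule; the exact rules and the parameterization by p follow the cited papers and are not reproduced here.

```python
# Sketch of OWA-based aggregation of Hamming distances over yes/no issues.
from itertools import product

def hamming(ballot, outcome):
    return sum(b != o for b, o in zip(ballot, outcome))

def owa_rule(ballots, num_issues, weights):
    """Return a yes/no outcome minimizing the OWA of the sorted Hamming distances."""
    best, best_score = None, float("inf")
    for outcome in product([0, 1], repeat=num_issues):
        distances = sorted((hamming(b, outcome) for b in ballots), reverse=True)
        score = sum(w * d for w, d in zip(weights, distances))
        if score < best_score:
            best, best_score = outcome, score
    return best

ballots = [(1, 1, 0), (1, 0, 0), (0, 1, 1)]
n = len(ballots)
print(owa_rule(ballots, 3, weights=[1 / n] * n))          # utilitarian (minisum)
print(owa_rule(ballots, 3, weights=[1] + [0] * (n - 1)))  # egalitarian (minimax)
```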
Freeman, Kahng and Pennock[10] study multiwinner approval voting with a variable number of winners. In fact, they treat each candidate as a binary issue (yes/no), so their setting can be seen as multi-issue voting with one candidate per round. They adapt the justified representation concepts to this setting as follows:
Skowron and Gorecki[11] study a similar setting: multi-issue voting with offline approval ballots, where in each round t there is a single candidate (a single yes/no decision). Their main fairness axiom is proportionality: each group of size k should be able to influence at least a fraction k/n of the decisions. This is in contrast to justified-representation axioms, which consider only cohesive groups. The difference is important, since empirical studies show that cohesive groups are rare.[12] Formally, they define two fairness notions for voting without abstentions:
For voting with abstentions, the definitions must be adapted (since if all voters abstain on all issues, their utility will necessarily be 0): instead of the total number of issues m, the factor becomes the number of issues on which no group member abstains.
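As an illustration only, the following checker tests one plausible formalization of the informal statement above, for voting without abstentions: on the issues where a group votes unanimously, the decisions should agree with the group on at least a k/n fraction of the m issues. The precise axioms of Skowron and Gorecki differ in their technical details.

```python
# Toy checker for an illustrative group guarantee; not the paper's exact axiom.
from math import floor

def group_guarantee_met(ballots, outcome, group):
    """ballots[i][t] in {0, 1}; outcome[t] in {0, 1}; group is a set of voter indices.
    Counts issues where the whole group votes unanimously and the decision agrees."""
    n, m = len(ballots), len(outcome)
    satisfied = sum(
        1
        for t in range(m)
        if len({ballots[i][t] for i in group}) == 1       # group is unanimous on t
        and outcome[t] == ballots[next(iter(group))][t]   # and the decision agrees
    )
    return satisfied >= floor(len(group) / n * m)
```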
They study two rules:
Teh, Elkind and Neoh[13] study utilitarian-welfare and egalitarian-welfare optimization in public decision making with approval preferences.
Brill, Markakis, Papasotiropoulos and Jannik Peters[14] extended the results of Skowron and Gorecki to issues with multiple candidates per round and possible dependencies between the issues; see the subsection on Fairness in combinatorial voting below.
Page, Shapiro and Talmon[15] studied a special case in which the "issues" are cabinet offices. For each office, there is a set of candidates; all sets are pairwise disjoint. Each voter should vote for a single candidate per office. The goal is to elect a single minister per office. In contrast to the public decision-making setting,[1] here the number of voters is large and the number of issues is small. They present two generalizations of the justified representation property:
They generalize the setting by allowing different issues (offices) to have different weights (importance, or power). They consider both an objective power function and subjective power functions. For an objective power function, they define a generalization of justified representation, which they call most important power allocation. They then present a greedy version of PAV, and show via simulations that it guarantees justified representation to minorities in many cases.
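The following is an illustrative greedy-PAV-style sketch for the cabinet setting (the exact rule of Page, Shapiro and Talmon may differ, and all names are made up): offices are filled one at a time, each time electing the candidate whose election yields the largest marginal increase in the PAV score, where a voter whose chosen candidates were elected in s offices contributes 1 + 1/2 + ... + 1/s.

```python
# Illustrative greedy PAV for electing one minister per office.

def greedy_pav(votes, offices):
    """votes[i][o] = candidate chosen by voter i for office o;
    offices[o] = list of candidates for office o."""
    elected = {}                       # office -> elected candidate
    satisfaction = [0] * len(votes)    # per-voter count of "won" offices so far
    while len(elected) < len(offices):
        best = None
        for o, cands in offices.items():
            if o in elected:
                continue
            for c in cands:
                # Marginal PAV gain: a voter with s satisfied offices gains 1/(s+1).
                gain = sum(
                    1 / (satisfaction[i] + 1)
                    for i, v in enumerate(votes)
                    if v[o] == c
                )
                if best is None or gain > best[0]:
                    best = (gain, o, c)
        _, o, c = best
        elected[o] = c
        for i, v in enumerate(votes):
            if v[o] == c:
                satisfaction[i] += 1
    return elected

# Two offices with disjoint candidate sets, three voters.
offices = {"defense": ["a1", "a2"], "finance": ["b1", "b2"]}
votes = [
    {"defense": "a1", "finance": "b1"},
    {"defense": "a1", "finance": "b2"},
    {"defense": "a2", "finance": "b2"},
]
print(greedy_pav(votes, offices))
```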
In online approval voting, it is common to assume that in each round t there are multiple candidates; the set of candidates is denoted by Ct. Each voter j approves a subset At,j of Ct.
Martin Lackner[2] studied perpetual voting with online approval ballots. He defined the following concepts:
Based on these concepts, he defined three fairness axioms:
He also defines two quantitative properties:
He defined a class of perpetual voting rules, called weighted approval voting. Each voter is assigned a weight, which is usually initialized to 1. At each round, the candidate with the highest sum of approving weights is elected (breaking ties by a fixed predefined order). The weights of voters who approved the winning candidate are decreased, and the weights of other voters are increased. Several common weighting schemes are:
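A minimal sketch of the weighted approval voting template is given below. The specific weight update used here (halving the weights of voters who approved the winner and adding 1 to the weights of the others) is only an illustrative scheme that follows the general description above, not necessarily one of the schemes studied by Lackner.

```python
# Sketch of a perpetual weighted approval voting rule with an illustrative
# weight-update scheme.

def perpetual_weighted_av(rounds, num_voters):
    """rounds: list of dicts mapping each candidate to the set of approving voters."""
    weights = [1.0] * num_voters
    winners = []
    for approvers in rounds:
        # Elect the candidate with the highest total approving weight
        # (ties broken by a fixed order, here alphabetical).
        winner = max(
            sorted(approvers),
            key=lambda c: sum(weights[i] for i in approvers[c]),
        )
        winners.append(winner)
        for i in range(num_voters):
            if i in approvers[winner]:
                weights[i] /= 2   # satisfied voters lose influence
            else:
                weights[i] += 1   # unsatisfied voters gain influence
    return winners

# Three identical rounds: voters 0-2 approve "movie", voters 3-4 "restaurant".
rounds = [{"movie": {0, 1, 2}, "restaurant": {3, 4}}] * 3
print(perpetual_weighted_av(rounds, num_voters=5))  # ['movie', 'restaurant', 'movie']
```

On the movie/restaurant example from the introduction, even this simple scheme lets the 40% minority win some of the rounds, rather than being outvoted every time.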
Maly and Lackner[3] discuss general classes of simple perpetual voting rules for online approval ballots, and analyze the axioms that can be satisfied by rules of each class. In particular, they discuss Perpetual Phragmén, Perpetual Quota and Perpetual Consensus.
Bulteau, Hazon, Page, Rosenfeld and Talmon[4] focus on fairness notions for groups of voters, rather than for individual voters. They adapt some justified representation properties to this setting. In particular, they define two variants of proportional justified representation (PJR). In both variants, a group of agents is said to agree in round t if there is at least one candidate in Ct that they all approve.
They prove that these axioms can be satisfied both in the static setting (where voters' preferences are the same in each round) and in the dynamic setting (where voters' preferences may change between rounds). They also report a human study identifying which outcomes ordinary people consider desirable.
Chandak, Goel and Peters[6] strengthen both axioms from PJR to EJR (the difference is that, in EJR, there must be at least L rounds in which the elected candidate is approved by the same member of S). They call their new axioms "EJR" and "strong-EJR". They also adapt three voting rules to this setting:
Bredereck, Fluschnik and Kaczmarczyk[16] study perpetual multiwinner voting: at each round, each voter votes for a single candidate, and the goal is to elect a committee of a given size. In addition, the difference between the new committee and the previous one should be bounded: in the conservative model the difference is bounded from above (two consecutive committees should have a small symmetric difference), whereas in the revolutionary model it is bounded from below (two successive committees should have a sizeable symmetric difference). Both models yield NP-hard problems, even for a constant number of agents.
One complication in multi-issue voting is that there may be dependencies between agents' preferences on different issues. For example, suppose the issues to be decided on are different kinds of food that may be given in a meal: the bread can be either black or white, and the main dish can be either hummus or tahini. An agent may want either black bread with hummus or white bread with tahini, but not the other two combinations. This phenomenon is called non-separability.
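For illustration, such a non-separable preference can be represented by utilities over full combinations, as in the following sketch (the numbers are made up for the bread/main-dish example above); no assignment of per-issue utilities can reproduce these values, which is exactly what non-separability means here.

```python
# A made-up utility table over full combinations; the agent likes only
# the two "matching" combinations.
preferences = {
    ("black bread", "hummus"): 1,
    ("white bread", "tahini"): 1,
    ("black bread", "tahini"): 0,
    ("white bread", "hummus"): 0,
}
# No additive (separable) representation u_bread(b) + u_dish(d) exists:
# summing the two combinations valued 1 uses the same per-issue terms as
# summing the two combinations valued 0, which would force 2 = 0.
```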
There are several approaches for eliciting voters' preferences when they are not separable:
A survey on voting in combinatorial domains is given by Lang and Xia, 2016.[26]
Brill, Markakis, Papasotiropoulos and Jannik Peters[14] study offline multi-issue voting with a non-binary domain, and possible dependencies between the issues, where the main goal is fair representation. They define generalizations of PAV and MES that handle conditional ballots; they call them conditional PAV and conditional MES. They prove that:
Lackner, Maly and Rey[27] extend the concept of perpetual voting to participatory budgeting (PB). A city running PB every year may want to make sure that the outcomes are fair over time, and not only within each individual PB round.
In fair allocation of indivisible public goods (FAIPG), society has to choose a set of indivisible public goods, where there are feasibility constraints on which subsets of elements can be chosen. Fain, Munagala and Shah[28] focus on three types of constraints:
Fain, Munagala and Shah[28] present a fairness notion for FAIPG, based on the core. They provide polynomial-time algorithms finding an additive approximation to the core, with a tiny multiplicative loss. With matroid constraints, the additive approximation is 2. With matching constraints, there is a constant additive bound. With packing constraints, with mild restrictions, the additive approximation is logarithmic in the width of the polytope. The algorithms are based on the convex program for maximizing the Nash social welfare.
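The convex program referred to here is, in its standard form, the maximization of the sum of logarithms of the agents' utilities over the feasible region (the exact fractional relaxation used in the cited paper may differ in details):

```latex
\max_{x \in P} \; \sum_{i=1}^{n} \log u_i(x)
```

where P denotes the set of feasible (fractional) outcomes and u_i is agent i's utility function.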
Garg, Kulkarni and Murhekar[29] study FAIPG with budget constraints. They show polynomial-time reductions for the solutions of maximum Nash welfare and leximin, between the models of private goods, public goods, and public decision making. They prove that maximum Nash welfare allocations are Prop1, RRS and Pareto-efficient. However, finding such allocations, as well as leximin allocations, is NP-hard even with constantly many agents or with binary valuations. They design pseudo-polynomial time algorithms for computing an exact MNW or leximin-optimal allocation for constantly many agents, and for constantly many goods with additive valuations. They also present an O(n)-factor approximation for maximum Nash welfare, which also satisfies RRS, Prop1, and 1/2-Prop.
Banerjee, Gkatzelis, Hossain, Jin, Micha and Shah[30] study FAIPG with predictions: in each round, a public good arrives, each agent reveals his value for the good, and the algorithm should decide how much to invest in the good (subject to a total budget constraint). There are approximate predictions of each agent's total value for all goods. The goal is to attain proportional fairness for groups. With binary valuations and unit budget, proportional fairness can be achieved without predictions. With general valuations and budgets, predictions are necessary to achieve proportional fairness.
Multi-issue voting rules are prone to strategic manipulation. A particularly simple form of manipulation is the free-rider problem: some voters may untruthfully oppose a popular opinion on one issue, in order to receive increased consideration on other issues. Lackner, Maly and Nardi[31] study this problem in detail. They show that: