Jameson Quinn
2017-07-14 12:17:45 UTC
I've made two voting method comparison tables
<https://docs.google.com/spreadsheets/d/1bNu4eFc1DC-IzJQt9qbyGVay85l5vwTmehXWXiZNVE4/edit#gid=90844263>
for multi-winner and single-winner methods. Unlike the "comparison of
electoral systems
<https://en.wikipedia.org/wiki/Comparison_of_electoral_systems>" table on
Wikipedia, these are meant to focus on political practice more than theory.
Thus, in terms of methods, I leave out some possibilities if they're
redundant (e.g., I include only one Condorcet method), overcomplicated, or unlikely to
be used in politics (e.g., Borda). And in terms of the aspects I compare
methods on, I try to include practical considerations rather than just
abstract criteria. For instance, "simplicity" is one aspect, and instead of
"later no harm" I have "chicken dilemma".
There are four tabs in the sheet: emoticon and numeric versions of the table
for each of the multi-winner and single-winner methods. In a few places the emoticons
and the numbers don't exactly correspond; I consider the numbers to be the
latest version. The methods in the left section are the ones I think are
discussed as reform proposals the most; the ones on the right are
interesting but IMO less likely to be implemented in English-speaking
countries. Between the two sections is a column which briefly explains
what I mean by each aspect.
If you consider the various aspects as voters and the methods as
candidates, the winning methods (under basically any method used as the
"meta method") are 3-2-1 for single-winner, and GOLD for multi-winner. It
is, of course, not a coincidence that a table I made ends up favoring two
methods I've designed. But I don't think this is because the table is
biased; I think my ratings are pretty much fair and objective. Rather, it's
because the aspects on this table are the aspects I care about, and so when
I designed those two methods, I deliberately optimized them on these
aspects. In other words, it's the methods which are biased to actually *be*
good, not the table which is biased to falsely rate them as good.
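To make the "meta method" idea concrete, here is a rough Python sketch: each
aspect acts as a voter scoring the methods, and a winner is picked both by
summed scores and by pairwise (Condorcet) comparison. The ratings in it are
made-up placeholders, not the actual numbers from the sheet's numeric tabs.

    # "Aspects as voters, methods as candidates" -- the ratings here are
    # placeholders, not the real numbers from the numeric tabs.
    ratings = {                     # aspect -> {method: score}
        "simplicity":      {"FPTP": 5, "IRV": 2, "3-2-1": 4},
        "chicken dilemma": {"FPTP": 1, "IRV": 3, "3-2-1": 4},
        "expressiveness":  {"FPTP": 0, "IRV": 2, "3-2-1": 4},
    }
    methods = ["FPTP", "IRV", "3-2-1"]

    # Meta method 1: score voting -- sum each method's ratings across aspects.
    totals = {m: sum(r[m] for r in ratings.values()) for m in methods}
    score_winner = max(totals, key=totals.get)

    # Meta method 2: Condorcet -- a method beats another if more aspects
    # rate it higher than rate it lower.
    def beats(a, b):
        up = sum(1 for r in ratings.values() if r[a] > r[b])
        down = sum(1 for r in ratings.values() if r[a] < r[b])
        return up > down

    condorcet_winner = next(
        (m for m in methods if all(beats(m, o) for o in methods if o != m)),
        None)

    print("score winner:", score_winner, totals)
    print("Condorcet winner:", condorcet_winner)

Swapping in the columns from the numeric tabs (and whatever other meta method
you prefer) is a small change, if anyone wants to check my claim.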
Of course, plurality/FPTP is the loser on both tables. Another thing worth
noting is how poorly IRV does among single-winner methods. Compared to
FPTP, it gives just 1/6 of the benefit that 3-2-1 would. I find that ratio
plausible.
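(Roughly, that ratio comes from comparing each method's total-score gain over
FPTP; the totals below are placeholders standing in for the real column sums,
just to show the arithmetic.)

    # Improvement-over-FPTP ratio -- placeholder totals, not the real sums.
    fptp_total, irv_total, three21_total = 10.0, 13.0, 28.0
    ratio = (irv_total - fptp_total) / (three21_total - fptp_total)
    print(f"IRV captures {ratio:.0%} of 3-2-1's gain over FPTP")  # ~17%, about 1/6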
Still, I understand that other people here will view this table with some
skepticism, and will have plenty of points to debate. I welcome that
discussion; that's why I'm posting it here.