Manual audits have become an integral part of every automated election worldwide. Voters and stakeholders demand them from election commissions as a safety measure against possible machine errors. To many, the presence of manual audits imbues elections with an extra layer of credibility.
Yet despite the high level of trust placed in manual audits, no conclusive study has shown that the activity is effective in guarding against electoral errors, unintentional or otherwise.
Quite the contrary: manual audits have been shown to incur error rates of up to 2 percent. According to a recent Rice University study, “Post-Election Auditing: Effects of Election Procedure and Ballot Type on Manual Counting Accuracy, Efficiency and Auditor Satisfaction and Confidence,” the “read-and-mark” method of auditing had an error rate of one-half to 1 percent, while the “sort-and-stack” method had an error rate of up to 2 percent.
The study argues: “While many argue manual audits are the ‘gold standard’ by which we must evaluate computerized ballot totals due to the insecure nature of such machines, we must be careful to remember that even the most basic tasks performed by humans can and do introduce error into the process.”
The study has prompted a number of observers to re-evaluate their position on manual auditing. Whereas it once enjoyed an almost hallowed place in automated elections, some now see it as an activity vulnerable to human error.
Perhaps this is the real trouble with manual audits: they are done by humans. Human counters are subjective and prone to fatigue, boredom, bias, and inconsistency, which makes manual counts unreliable.
In elections using optical mark readers (OMRs), for example, markings on the ballot may be too ambiguous to determine the voter’s true intent. It is easy for a human auditor to count a stray mark as a valid vote, or to dismiss as invalid a mark that was intended as a vote.
In the article “Even Careful Hand Counts Can Produce Election Errors,” Joel Shurkin quotes Daniel Castro, an expert on electronic voting and senior analyst at the Information Technology and Innovation Foundation in Washington, as saying that manual auditing is “…a great example of how humans are generally not that good at repetitive tasks like counting, but computers are really good at that.” “I do not think we will ever have a hand counting system that outperforms a computer, especially if you factor in cost,” Castro added.
It is perhaps the height of irony that a subjective count is used to audit a count performed by a cold, impartial, and objective machine.