What style of journalistic factchecking is most convincing to readers? This study uses an online survey experiment to compare two prevailing approaches to correcting both consumer and political misinformation: factchecks that rely solely on written analysis to assess claims, and those that also deploy a graphical meter or "truth scale." Testing a series of simulated factchecks from a fictitious factchecking organization, GetTheFacts.org, we find that both approaches were effective on the whole: respondents who saw either format were significantly more likely than a control group to correctly evaluate a claim that had been previously debunked. Does using a truth meter make a difference? In the case of a misleading advertising claim unrelated to politics, adding a meter to the written analysis appeared to make the correction more convincing. In challenging political misinformation, however, both formats proved equally effective. Both formats also yielded their largest improvements among readers who self-identified with the same party as the politician being checked. Although respondents scored best at identifying misinformation from a politician of the opposing party, seeing a correction made no significant difference in that case. Among other results, we find that, when given the choice, just over half of respondents preferred corrections that included a truth scale.